Recent Content of Assessing Writing
Titles and Abstracts
ScienceDirect Publication: Assessing Writing

The TOEFL iBT writing: Korean students' perceptions of the TOEFL iBT writing test
Publication date: July 2017
Source:Assessing Writing, Volume 33
Author(s): Eun-Young Julia Kim
The TOEFL is one of the most widely recognized language proficiency tests developed to measure international students' readiness for degree study. Whereas a number of correlational studies have been conducted by various affiliates of ETS based on large-scale quantitative data, there is a dearth of studies that explore test-takers' perceptions of and experiences with the TOEFL iBT. Writing skills are of paramount importance for academic success, and high-stakes tests such as the TOEFL tend to influence test-takers' perceptions of what defines good academic writing. To date, no research has specifically focused on test-takers' perceptions of the writing section of the TOEFL iBT. To fill this gap, this study analyzes online forum data to explore Korean students' perceptions of effective strategies for preparing for the TOEFL iBT writing test, the challenges they face in the test-taking and test-preparation processes, and the implications such findings have for various stakeholders. Findings indicate that scores on the writing section of the TOEFL iBT, albeit helpful as an initial benchmarking tool, may conceal more than they reveal about Korean students' academic writing ability. The study suggests that the format, questions, and scoring of the TOEFL iBT writing test be critically examined from test-takers' perspectives.



Assessing peer and instructor response to writing: A corpus analysis from an expert survey
Publication date: July 2017
Source:Assessing Writing, Volume 33
Author(s): Ian G. Anson, Chris M. Anson
Over the past 30 years, considerable scholarship has critically examined the nature of instructor response on written assignments in the context of higher education (see Straub, 2006). However, as Haswell (2008) has noted, less is currently known about the nature of peer response, especially as it compares with instructor response. In this study, we critically examine some of the properties of instructor and peer response to student writing. Using the results of an expert survey that provided a lexically-based index of high-quality response, we evaluate a corpus of nearly 50,000 peer responses produced at a four-year public university. Combined with the results of this survey, a large-scale automated content analysis shows first that instructors have adopted some of the field's lexical estimation of high-quality response, and second that student peer response reflects the early acquisition of this lexical estimation, although at a further remove from instructors. The results suggest promising directions for the parallel improvement of both instructor and peer response.



Understanding university students' peer feedback practices in EFL writing: Insights from a case study
Publication date: July 2017
Source:Assessing Writing, Volume 33
Author(s): Shulin Yu, Guangwei Hu
While research on peer feedback in the L2 writing classroom has proliferated over the past three decades, only limited attention has been paid to how students respond to their peers' writing in specific contexts and why they respond in the ways they do. As a result, much remains to be known about how individual differences and contextual influences shape L2 students' peer feedback practices. To bridge the research gap, this case study examines two Chinese EFL university students' peer feedback practices and the factors influencing their feedback practices. Analyses of multiple sources of data including interviews, video recordings of peer feedback sessions, stimulated recalls, and texts reveal that the students took markedly different approaches when responding to their peers' writing. The findings also indicate that their peer feedback practices were situated in their own distinct sociocultural context and mediated by a myriad of factors including beliefs and values, motives and goals, secondary school learning and feedback experience, teacher feedback practices, feedback training, feedback group dynamics, as well as learning and assessment culture.



Evaluating rater accuracy and perception for integrated writing assessments using a mixed-methods approach
Publication date: July 2017
Source:Assessing Writing, Volume 33
Author(s): Jue Wang, George Engelhard, Kevin Raczynski, Tian Song, Edward W. Wolfe
Integrated writing (IW) assessments underscore the connections between reading comprehension and writing skills. These assessments typically include rater-mediated components. Our study identified IW essays that are difficult to score accurately and then investigated the reasons based on rater perceptions and judgments. The IW assessments in our data are used as formative assessments designed to provide information on the developing literacy of students. We used a mixed-methods approach, with rater accuracy defined quantitatively based on Rasch measurement theory and a survey-based qualitative method designed to investigate rater perceptions and judgments toward student essays within the context of IW assessments. The quantitative analyses suggest that the essays and raters vary along a continuum designed to represent rating accuracy. The qualitative analyses suggest that raters' perceptions of certain essay features, such as the amount of textual borrowing, the development of ideas, and the consistency of the focus, were inconsistent with those of the experts. The implications of this study for the research and practice of IW assessments are discussed.



Similarities and differences in constructs represented by U.S. states' middle school writing tests and the 2007 National Assessment of Educational Progress writing assessment
Publication date: July 2017
Source:Assessing Writing, Volume 33
Author(s): Ya Mo, Gary A. Troia
Little is known regarding the underlying constructs of writing tests used by U.S. state education authorities and national governments to evaluate the writing performance of their students, especially in the middle school grades. Through a content analysis of 78 prompts and 35 rubrics from 27 states' middle school writing assessments from 2001 to 2007, and three representative prompts and rubrics from the United States' 2007 National Assessment of Educational Progress (NAEP) writing test, this study illuminates the writing constructs underlying large-scale writing assessments through examination of features in prompts and rubrics and investigation of the connections between prompts and rubrics in terms of genre demands. We found that the content of state writing assessments and the NAEP align with respect to measurement parameters associated with (a) emphasis on writing process, audience awareness, and topic knowledge, (b) availability of procedural facilitators (e.g., checklists, rubrics, dictionaries) to assist students in their writing, and (c) inclusion of assessment criteria focused on organization, structure, content, details, sentence fluency, semantics, and general conventions. However, the NAEP's writing assessment differs from many state tests of writing by including explicit directions for students to review their writing, giving students two timed writing tasks rather than one, making informational text production one of the three genres assessed, and including genre-specific evaluative components in rubrics. This study contributes to our understanding of the direction large-scale writing assessments in the US are taking and how they continue to evolve.



Ed.Board/Aims and scope
Publication date: April 2017
Source:Assessing Writing, Volume 32





Placement of multilingual writers: Is there a role for student voices?
Publication date: April 2017
Source:Assessing Writing, Volume 32
Author(s): Dana R. Ferris, Katherine Evans, Kendon Kurzer
Directed Self-Placement (DSP) is one placement model that has been implemented in various composition programs in the U.S. but has yet to be investigated thoroughly in second language writing settings. Central to DSP is the belief that, if students are given agency to help determine their educational trajectory, they will be empowered and more motivated to succeed (Crusan, 2011; Royer & Gilles, 1998). In this study, 1067 university L2 students completed both a voluntary self-assessment survey and the locally administered placement examination. We statistically compared the students' placement exam scores with their responses to the survey's final question, which asked which level of a four-course writing program they thought would best meet their needs. We also examined a stratified random sample of 100 students' standardized test scores to see whether there was a statistical relationship between those tests, our locally designed and administered placement test, and students' own self-placement scores. We conclude that student self-assessment might have a legitimate role in our placement process, but it probably cannot be used by itself to accurately place large numbers of multilingual students into a four-level sequence.



Improvement of writing skills during college: A multi-year cross-sectional and longitudinal study of undergraduate writing performance
Publication date: April 2017
Source:Assessing Writing, Volume 32
Author(s): Daniel Oppenheimer, Franklin Zaromb, James R. Pomerantz, Jean C. Williams, Yoon Soo Park
We examined persuasive and expository writing samples collected from more than 300 college students as part of a nine-year cross-sectional and longitudinal study of undergraduate writing performance, conducted between 2000 and 2008. Using newly developed scoring rubrics, longitudinal analyses of writing scores revealed statistically significant growth in writing performance over time. These findings held for both persuasive and expository writing. Although writing performance was better among women than men, and better among students majoring in the humanities and social sciences than in natural sciences and engineering, neither women nor humanities and social science majors showed differential improvement over time from freshman to senior year. Our findings showed reliable increases in writing performance during a student's college years, and moreover demonstrated that such longitudinal changes can be effectively measured. We call for more such outcome assessment in higher education as an essential tool to enhance student learning.



To make a long story short: A rubric for assessing graduate students' academic and popular science writing skills
Publication date: April 2017
Source:Assessing Writing, Volume 32
Author(s): Tzipora Rakedzon, Ayelet Baram-Tsabari
Graduate students are future scientists, and as such, being able to communicate science is imperative for their integration into the scientific community. This is primarily achieved through scientific papers, mostly published in English; however, interactions outside of academia are also beneficial for future scientists. Therefore, academic writing courses are prevalent and popular science communication courses are on the rise. Nevertheless, no rubrics exist for assessing students' writing in academic and science communication courses. This article describes the development and testing of a rubric for assessing advanced L2 STEM graduate students' writing in academic (abstract) and popular science (press release) genres. The rubric was developed as part of a longstanding academic writing course that was modified to include a module on science communication with the lay public. Analysis of student needs and the literature inspired a pre-pilot that assessed 16 descriptors on 60 student works. A subsequent, adjusted pilot version, applied to 30 students' work, resulted in adaptations to fit each genre and the course goals. In the third round, a final, modified rubric was created and tested on 177 graduate students; it can be used both to assess and to compare the genres. This rubric can assess scientific genres at the graduate level and can be adapted for other genres and levels.



Checking assumed proficiency: Comparing L1 and L2 performance on a university entrance test
Publication date: April 2017
Source:Assessing Writing, Volume 32
Author(s): Bart Deygers, Kris Van den Branden, Elke Peters
This study compares the results of three groups of participants on the writing component of a centralised L2 university entrance test at the B2 level in Flanders, Belgium. The study investigates whether all Flemish candidates have a B2 level in Dutch upon university entrance, and whether L1 test takers outperform L2 candidates who learned Dutch at home or in Flanders. The results show that, even though the Flemish group outperformed both groups of L2 candidates, not all Flemish candidates reached the B2 level. Additionally, the study compares the results of two groups of L2 users on the same test and shows that candidates who studied Dutch in a Dutch-speaking context do not necessarily outscore candidates who did not. The primary methods of analysis include non-parametric regression and multi-faceted Rasch measurement. The results are interpreted in terms of Hulstijn's conceptualisation of Higher Language Competence and the study abroad literature. Implications for university entrance policy are discussed at the end of the paper.



The effectiveness of instructor feedback for learning-oriented language assessment: Using an integrated reading-to-write task for English for academic purposes
Publication date: April 2017
Source:Assessing Writing, Volume 32
Author(s): Ah-Young (Alicia) Kim, Hyun Jung Kim
Learning-oriented language assessment (LOLA) can be effective in promoting learning through assessment by creating a link between the two. Although previous studies have examined the effectiveness of feedback, a major element of LOLA, in L2 writing, few have examined how LOLA could be implemented using an integrated reading-to-write task in English for academic purposes (EAP) contexts, which was the objective of this study. Participants were ten Korean TESOL graduate students taking a research methods course and their professor. During a seven-week period, each student completed a weekly integrated reading-to-write task as part of their classroom assessment: they read an academic research paper on a topic of their choice and wrote a review of it. After receiving feedback from the instructor, students revised their work and resubmitted it the following week. Students and the instructor also participated in semi-structured interviews to discuss the effectiveness of learning-oriented feedback on academic reading-to-write tasks. Learners displayed varying developmental patterns, with some students showing more improvement than others. The findings highlighted two participants' progress in the content domain. Qualitative analysis results suggest that the students reacted differently to the instructor feedback, leading to varying degrees of writing enhancement. The results provide pedagogical implications for using integrated academic reading-to-write tasks and sustained feedback for LOLA.



Textual voice elements and voice strength in EFL argumentative writing
Publication date: April 2017
Source:Assessing Writing, Volume 32
Author(s): Hyung-Jo Yoon
This study examined how the quantity and diversity of textual voice elements contribute to holistic voice strength and essay quality. For the quantification of voice elements, this study used an automated processing tool, the Authorial Voice Analyzer (AVA), which was developed based on categories from Hyland's voice model (i.e., hedges, boosters, attitude markers, self-mentions, reader pronouns, and directives). To explore the relationship between textual voice elements and holistic voice strength, as well as between voice elements and essay quality, this study analyzed 219 argumentative essays written by L1 Greek-speaking EFL students. The results suggested positive, but weak to moderate, correlations between textual voice and holistic voice strength; a regression model with three textual voice features explained 26% of the variance in voice strength scores. The results also indicated weak correlations between textual voice and essay quality. Interestingly, the textual voice features contributing to voice strength (boosters, attitude markers, and self-mentions) were different from those contributing to essay quality (hedges). Interpreting these findings in relation to the context (timed argumentative writing in an EFL setting), this study suggests implications for L2 writing assessment and pedagogy.



Validation of a locally created and rated writing test used for placement in a higher education EFL program
Publication date: April 2017
Source:Assessing Writing, Volume 32
Author(s): Robert C. Johnson, A. Mehdi Riazi
This paper reports a study conducted to validate a locally created and rated writing test. The test was used to inform a higher education institution's decisions regarding placement of entering students into appropriate preparatory English program courses. An amalgam of two influential models, Kane's (1992, 1994) interpretive model and Bachman's (2005) and Bachman and Palmer's (2010) assessment use argument, was used to build a validation framework. A mixed methods approach incorporating a diverse array of quantitative and qualitative data from various stakeholders, including examinees, students, instructors, staff, and administrators, guided the collection and analysis of evidence informing the validation. Results raised serious doubts about the writing test, not only in terms of interpreted score meaning, but also in terms of the impact of its use on various stakeholders and on teaching and learning. The study reinforces the importance of comprehensive validation efforts, particularly by test users, for all instruments informing decisions about test-takers, including writing tests and other types of direct performance assessments. Results informed a number of suggested changes regarding the rubric and rater training, among others, thus demonstrating the potential of validation studies as "road maps" for immediate opportunities to improve both testing and the decisions made based on testing.



First Year University Writing: A Corpus-Based Study with Implications for Pedagogy, L. Aull. Palgrave Macmillan (2015), 239 pp.
Publication date: April 2017
Source:Assessing Writing, Volume 32
Author(s): Mark Chapman




Are TOEFL iBT® writing test scores related to keyboard type? A survey of keyboard-related practices at testing centers
Publication date: January 2017
Source:Assessing Writing, Volume 31
Author(s): Guangming Ling
The strength of a computer-based writing test, such as the TOEFL iBT® Writing Test, lies in its capability to assess all examinees under the same conditions, so that scores reflect the targeted writing abilities rather than differences in testing conditions, such as the type of keyboard. The familiarity and proficiency examinees have with a specific type of keyboard could affect their efficiency in writing essays and introduce construct-irrelevant variance, although little research on this issue is available in the literature. To explore this, we surveyed 2214 TOEFL iBT testing centers in 134 countries on practices related to keyboard type and analyzed the centers' responses and the TOEFL iBT scores of examinees from these centers. Results revealed that (a) most testing centers used the U.S. standard English keyboard (USKB) for the test, but a small proportion of centers used a country-specific keyboard (CSKB) that had been converted to the USKB layout; and (b) TOEFL iBT Writing scores appear to be significantly associated with the type of keyboard and overlay in only 10 countries, with trivial or small score differences associated with keyboard type. These findings suggest that current practices related to keyboard type have little or no practical effect on examinees' TOEFL iBT Writing scores.