Recent Content of Assessing Writing
This site designed and maintained by Prof. Glenn Fulcher @languagetesting.info

Titles and Abstracts
     
 
ScienceDirect Publication: Assessing Writing

The TOEFL iBT writing: Korean students' perceptions of the TOEFL iBT writing test
Publication date: July 2017
Source: Assessing Writing, Volume 33
Author(s): Eun-Young Julia Kim
The TOEFL is one of the most widely recognized language proficiency tests developed to measure international students' level of readiness for degree study. Whereas there exist a number of correlational studies conducted by various affiliates of ETS based on large-scale quantitative data, there is a dearth of studies that explore test-takers' perceptions and experiences concerning the TOEFL iBT. Writing skills are of paramount importance for academic success, and high-stakes tests such as the TOEFL tend to influence test-takers' perceptions of what defines good academic writing. To date, no research has specifically focused on test-takers' perceptions of the writing section of the TOEFL iBT. To fill this gap, this study explores Korean students' perceptions of effective strategies for preparing for the TOEFL iBT writing test, the challenges they face in the test-taking and test-preparation processes, and the implications such findings have for various stakeholders, by analyzing online forum data. Findings indicate that scores on the writing section of the TOEFL iBT, albeit helpful as an initial benchmarking tool, may conceal more than they reveal about Korean students' academic writing ability. The study suggests that the format, questions, and scoring of the TOEFL iBT writing test be critically examined from test-takers' perspectives.



Assessing peer and instructor response to writing: A corpus analysis from an expert survey
Publication date: July 2017
Source: Assessing Writing, Volume 33
Author(s): Ian G. Anson, Chris M. Anson
Over the past 30 years, considerable scholarship has critically examined the nature of instructor response on written assignments in the context of higher education (see Straub, 2006). However, as Haswell (2008) has noted, less is currently known about the nature of peer response, especially as it compares with instructor response. In this study, we critically examine some of the properties of instructor and peer response to student writing. Using the results of an expert survey that provided a lexically-based index of high-quality response, we evaluate a corpus of nearly 50,000 peer responses produced at a four-year public university. Combined with the results of this survey, a large-scale automated content analysis shows first that instructors have adopted some of the field's lexical estimation of high-quality response, and second that student peer response reflects the early acquisition of this lexical estimation, although at a further remove from their instructors. The results suggest promising directions for the parallel improvement of both instructor and peer response.
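To make the idea of a lexically-based index concrete, here is a minimal, hypothetical sketch in Python; the lexicon terms and sample responses below are invented for illustration and are not the instrument the authors built from their expert survey.

# Hypothetical sketch: score each response by the density of terms drawn
# from an expert-derived lexicon of high-quality response language.
# The lexicon words and sample corpus are illustrative placeholders only.
import re

EXPERT_LEXICON = {"revise", "evidence", "thesis", "clarify", "audience", "develop"}

def lexical_index(response: str) -> float:
    """Proportion of tokens in a response that appear in the expert lexicon."""
    tokens = re.findall(r"[a-z']+", response.lower())
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in EXPERT_LEXICON)
    return hits / len(tokens)

corpus = [
    "You might clarify the thesis and add evidence in paragraph two.",
    "Nice job! I liked it.",
]
for text in corpus:
    print(f"{lexical_index(text):.3f}  {text}")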



Placement of multilingual writers: Is there a role for student voices?
Publication date: April 2017
Source: Assessing Writing, Volume 32
Author(s): Dana R. Ferris, Katherine Evans, Kendon Kurzer
Directed Self-Placement (DSP) is one placement model that has been implemented in various composition programs in the U.S. but has yet to be investigated thoroughly in second language writing settings. Central to DSP is the belief that, if students are given agency to help determine their educational trajectory, they will be empowered and more motivated to succeed (Crusan, 2011; Royer & Gilles, 1998). In this study, 1067 university L2 students completed both a voluntary self-assessment survey and the locally administered placement examination. We statistically compared the students' placement exam scores and their responses to the final question as to which level of a four-course writing program they thought would best meet their needs. We also examined a stratified random sample of 100 students' standardized test scores to see if there was a statistical relationship between those tests, our locally designed and administered placement test, and students' own self-placement scores. We conclude that student self-assessment might have a legitimate role in our placement process, but it probably cannot be used by itself to accurately place large numbers of multilingual students into a four-level sequence.
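For readers unfamiliar with this kind of comparison, a bare-bones sketch of relating self-placement choices to exam placements with a rank correlation might look like the following; the numbers are invented and the analysis is far simpler than the study's own.

# Illustrative sketch only: comparing students' self-placement choices (levels 1-4)
# with the levels assigned by a placement exam, using a rank correlation.
# The data below are fabricated for demonstration; they are not the study's data.
from scipy.stats import spearmanr

self_placement = [1, 2, 2, 3, 4, 3, 2, 4, 1, 3]   # level each student chose
exam_placement = [1, 2, 3, 3, 4, 2, 2, 4, 2, 3]   # level assigned by the exam

rho, p = spearmanr(self_placement, exam_placement)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")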



Improvement of writing skills during college: A multi-year cross-sectional and longitudinal study of undergraduate writing performance
Publication date: April 2017
Source: Assessing Writing, Volume 32
Author(s): Daniel Oppenheimer, Franklin Zaromb, James R. Pomerantz, Jean C. Williams, Yoon Soo Park
We examined persuasive and expository writing samples collected from more than 300 college students as part of a nine-year cross-sectional and longitudinal study of undergraduate writing performance, conducted between 2000 and 2008. Using newly developed scoring rubrics, longitudinal analyses of writing scores revealed statistically significant growth in writing performance over time. These findings held for both persuasive and expository writing. Although writing performance was better among women than men, and better among students majoring in the humanities and social sciences than in natural sciences and engineering, neither women nor humanities and social science majors showed differential improvement over time from freshman to senior year. Our findings showed reliable increases in writing performance during a student's college years, and moreover demonstrated that such longitudinal changes can be effectively measured. We call for more such outcome assessment in higher education as an essential tool to enhance student learning.



To make a long story short: A rubric for assessing graduate students' academic and popular science writing skills
Publication date: April 2017
Source: Assessing Writing, Volume 32
Author(s): Tzipora Rakedzon, Ayelet Baram-Tsabari
Graduate students are future scientists, and as such, being able to communicate science is imperative for their integration into the scientific community. This is primarily achieved through scientific papers, mostly published in English; however, interactions outside of academia are also beneficial for future scientists. Therefore, academic writing courses are prevalent and popular science communication courses are on the rise. Nevertheless, no rubrics exist for assessing students' writing in academic and science communication courses. This article describes the development and testing of a rubric for assessing advanced L2 STEM graduate students' writing in academic (abstract) and popular science (press release) genres. The rubric was developed as part of a longstanding academic writing course, but was modified to include a module on science communication with the lay public. Analysis of student needs and the literature inspired a pre-pilot that assessed 16 descriptors on 60 student works. A subsequent, adjusted pilot version, used with 30 students, resulted in adaptations to fit each genre and the course goals. In the third round, a modified, final rubric was tested on 177 graduate students; it can be used both for assessment and for comparison across the genres. This rubric can assess scientific genres at the graduate level and can be adapted for other genres and levels.



Checking assumed proficiency: Comparing L1 and L2 performance on a university entrance test
Publication date: April 2017
Source: Assessing Writing, Volume 32
Author(s): Bart Deygers, Kris Van den Branden, Elke Peters
This study compares the results of three groups of participants on the writing component of a centralised L2 university entrance test at the B2 level in Flanders, Belgium. The study investigates whether all Flemish candidates have a B2 level in Dutch upon university entrance, and whether L1 test takers outperform L2 candidates who learned Dutch at home or in Flanders. The results show that, even though the Flemish group outperformed both groups of L2 candidates, not all Flemish candidates reached the B2 level. Additionally, the study compares the results of two groups of L2 users on the same test and shows that candidates who studied Dutch in a Dutch-speaking context do not necessarily outscore candidates who did not. The primary methods of analysis include non-parametric regression and Multi-Faceted Rasch analysis. The results are interpreted in terms of Hulstijn's conceptualisation of Higher Language Competence and the study abroad literature. Implications for university entrance policy are discussed at the end of the paper.
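As a rough illustration only, a non-parametric comparison of writing scores across the three candidate groups could be sketched as below; the scores are invented, and the article's actual analyses (non-parametric regression and Multi-Faceted Rasch) are considerably richer than this.

# Simplified, hypothetical sketch of a non-parametric group comparison on
# writing scores for three candidate groups (L1 Flemish, L2 home, L2 Flanders).
# All values are invented placeholders, not the study's data.
from scipy.stats import kruskal

l1_flemish  = [78, 82, 74, 69, 88, 91, 77]
l2_home     = [65, 70, 72, 61, 68, 74, 66]
l2_flanders = [67, 73, 69, 64, 71, 70, 68]

h, p = kruskal(l1_flemish, l2_home, l2_flanders)
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.3f}")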



The effectiveness of instructor feedback for learning-oriented language assessment: Using an integrated reading-to-write task for English for academic purposes
Publication date: April 2017
Source: Assessing Writing, Volume 32
Author(s): Ah-Young (Alicia) Kim, Hyun Jung Kim
Learning-oriented language assessment (LOLA) can be effective in promoting learning through assessment by creating a link between the two. Although previous studies have examined the effectiveness of feedback, a major element of LOLA, in L2 writing, few have examined how LOLA could be implemented using an integrated reading-to-write task in English for academic purposes (EAP) contexts, which was the objective of this study. Participants were ten Korean TESOL graduate students taking a research methods course and their professor. During a seven-week period, each student completed a weekly integrated reading-to-write task as part of their classroom assessment: they read an academic research paper on a topic of their choice and wrote a review of it. After receiving feedback from the instructor, students revised their work and resubmitted it the following week. Students and the instructor also participated in a semi-structured interview to discuss the effectiveness of learning-oriented feedback on academic reading-to-write tasks. Learners displayed varying developmental patterns, with some students showing more improvement than others. The findings highlighted two participants' progress in the content domain. Qualitative analysis suggests that the students reacted differently to the instructor feedback, leading to varying degrees of writing improvement. The results provide pedagogical implications for using integrated academic reading-to-write tasks and sustained feedback for LOLA.



Textual voice elements and voice strength in EFL argumentative writing
Publication date: April 2017
Source: Assessing Writing, Volume 32
Author(s): Hyung-Jo Yoon
This study examined how the quantity and diversity of textual voice elements contribute to holistic voice strength and essay quality. For the quantification of voice elements, this study used an automated processing tool, the Authorial Voice Analyzer (AVA), which was developed based on categories from Hyland's voice model (i.e., hedges, boosters, attitude markers, self-mentions, reader pronouns, and directives). To explore the relationship between textual voice elements and holistic voice strength, as well as between voice elements and essay quality, this study analyzed 219 argumentative essays written by L1 Greek-speaking EFL students. The results suggested positive, but weak to moderate, correlations between textual voice and holistic voice strength; a regression model with three textual voice features explained 26% of the variance in voice strength scores. The results also indicated weak correlations between textual voice and essay quality. Interestingly, the textual voice features contributing to voice strength (boosters, attitude markers, and self-mentions) were different from those contributing to essay quality (hedges). Interpreting these findings in relation to the context (timed argumentative writing in an EFL context), this study suggests implications for L2 writing assessment and pedagogy.
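A toy sketch of what counting textual voice elements can look like, using a handful of invented marker words per Hyland-style category; these tiny lists are illustrative placeholders, not the AVA's actual dictionaries.

# Hypothetical sketch: tally voice-element categories in an essay.
# Marker lists are invented samples for illustration only.
import re

VOICE_MARKERS = {
    "hedges":          {"might", "perhaps", "possibly", "seems"},
    "boosters":        {"clearly", "definitely", "certainly"},
    "attitude":        {"unfortunately", "surprisingly", "important"},
    "self_mentions":   {"i", "my", "we", "our"},
    "reader_pronouns": {"you", "your"},
    "directives":      {"consider", "note", "imagine"},
}

def voice_profile(essay: str) -> dict:
    """Count how many tokens fall into each voice-element category."""
    tokens = re.findall(r"[a-z']+", essay.lower())
    return {cat: sum(t in words for t in tokens) for cat, words in VOICE_MARKERS.items()}

print(voice_profile("Clearly, you should consider that my claim might be important."))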



Ed.Board/Aims and scope
Publication date: January 2017
Source: Assessing Writing, Volume 31





Are TOEFL iBT® writing test scores related to keyboard type? A survey of keyboard-related practices at testing centers
Publication date: January 2017
Source: Assessing Writing, Volume 31
Author(s): Guangming Ling
The strength of a computer-based writing test, such as the TOEFL iBT® Writing Test, lies in its capability to assess all examinees under the same conditions so that scores reflect the targeted writing abilities rather than differences in testing conditions, such as types of keyboards. The familiarity and proficiency examinees have with a specific type of keyboard could affect their efficiency in writing essays and introduce construct-irrelevant variance, although little research is available in the literature. To explore this, we surveyed 2214 TOEFL iBT testing centers in 134 countries on practices related to keyboard type and analyzed the centers' responses and the TOEFL iBT scores of examinees from these centers. Results revealed that (a) most testing centers used the U.S. standard English keyboard (USKB) for the test, but a small proportion of centers used a country-specific keyboard (CSKB) after being converted to the USKB; (b) TOEFL iBT Writing scores appear to be significantly associated with the types of keyboard and overlay in only 10 countries, with trivial or small score differences associated with keyboard type. These findings suggest that current practices related to keyboard type have little or no practical effect on examinees' TOEFL iBT Writing scores.
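As a hedged illustration of the kind of score comparison reported, a standardized mean difference between keyboard conditions could be computed as follows; all numbers are invented placeholders, and this is not the study's method or ETS data.

# Illustrative sketch: Cohen's d between mean Writing scores for examinees
# tested on a U.S. standard keyboard vs. a converted country-specific keyboard.
# Scores are fabricated; pooled SD here assumes equal group sizes.
import statistics

uskb_scores = [22, 24, 21, 25, 23, 22, 26]
cskb_scores = [21, 23, 22, 24, 22, 21, 25]

def cohens_d(a, b):
    pooled_sd = ((statistics.variance(a) + statistics.variance(b)) / 2) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / pooled_sd

print(f"Cohen's d = {cohens_d(uskb_scores, cskb_scores):.2f}")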



How students' ability levels influence the relevance and accuracy of their feedback to peers: A case study
Publication date: January 2017
Source: Assessing Writing, Volume 31
Author(s): Ivan Chong
Traditionally, teachers play a central role in creating a learning environment that favors the implementation of peer assessment in writing. Nevertheless, students' writing ability, and how it factors into their provision of relevant (content-related) and accurate (language-related) written feedback, has received little consideration. This is largely because most studies of peer assessment have been conducted in tertiary settings, where researchers assume that university students have attained a basic level of cognitive and linguistic development that empowers them to make judgments about their peers' work. The present study, conducted in a Hong Kong secondary school, investigated this research gap by analyzing first drafts produced by a class of 16 Secondary 1 (Grade 7) students in a writing unit. The first section of the study reports students' writing abilities in terms of content development and linguistic accuracy; findings in the subsequent section suggest that there is a strong and positive relationship between students' writing abilities and the relevance and accuracy of their written feedback. This paper ends with two pedagogical implications for implementing peer assessment: alignment with pre-writing instruction and the development of marking focuses based on students' abilities.



K-12 multimodal assessment and interactive audiences: An exploratory analysis of existing frameworks
Publication date: January 2017
Source: Assessing Writing, Volume 31
Author(s): Ewa McGrail, Nadia Behizadeh
Multimodal writing today often occurs through membership in an online, participatory culture; thus, given the affordances of online composition, the audience for student writers has shifted from imagined readers to actual, accessible readers and responders. Additionally, recent content and technology standards for students in US schools emphasize the importance of distributing multimodal compositions to wider audiences. In this article, we closely examine attention to interactive audience and collaboration in a purposive sample of kindergarten through 12th grade (K-12) assessment frameworks, as well as how these frameworks define multimodal composition. We found that multimodal composition is defined consistently across the frameworks as composition that includes multiple ways of communicating. However, many of the multimodal composition examples were non-interactive text types, even though many authors acknowledged the emergence of interactive online composition types that afford the writer the ability to communicate and collaborate with an audience. In addition, the frameworks reviewed tended to focus on the final product and less often on the process or on dynamic collaboration with the audience. In the discussion, implications for classroom teachers as well as considerations for researchers exploring the construct of online multimodal writing are offered.



Responding to student writing online: Tracking student interactions with instructor feedback in a Learning Management System
Publication date: January 2017
Source: Assessing Writing, Volume 31
Author(s): Angela Laflen, Michelle Smith
Instructor response to student writing increasingly takes place within Learning Management Systems (LMSs), which often make grades visible apart from instructor feedback by default. Previous studies indicate that students generally ascribe more value to grades than to instructor feedback, while instructors believe that feedback is most important. This study investigated how students interact with an LMS interface (an instance of Sakai) to access instructor feedback on their writing. Our blind study analyzed data from 334 students in 16 courses at a medium-sized, comprehensive private college to investigate the question: does the rate at which students open attachments with instructor feedback differ if students can see their grades without opening the attachment? We compared two response methodologies: mode 1 made grades visible apart from feedback, and mode 2 required students to open attached feedback files to find their grades. The data for each mode were collected automatically by the LMS, retrieved, and retrospectively analyzed. The results show that making grades visible separately from feedback significantly reduced the rate at which students opened instructor feedback files and that timing also impacted students' rate of access. These findings provide the basis for empirically informed best practices for grading and returning papers online.
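A minimal sketch of how one might compare attachment-opening rates across the two modes is shown below; the counts are fabricated placeholders rather than the study's LMS data, and the choice of a chi-square test is an assumption for illustration.

# Illustrative sketch: comparing the rate at which feedback attachments were
# opened under two response modes (grade visible vs. grade inside the file).
# Counts are invented for demonstration purposes only.
from scipy.stats import chi2_contingency

#                      opened, not opened
mode1_grade_visible = [90, 80]
mode2_grade_hidden  = [140, 24]

chi2, p, dof, expected = chi2_contingency([mode1_grade_visible, mode2_grade_hidden])
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")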



"I feel disappointed": EFL university students' emotional responses towards teacher written feedback
Publication date: January 2017
Source: Assessing Writing, Volume 31
Author(s): Omer Hassan Ali Mahfoodh
Studies on teacher written feedback in second language (L2) contexts have not given adequate attention to learners' emotional responses towards teacher written feedback. Thus, this study examined the relationship between EFL university students' emotional responses towards teacher written feedback and the success of their revisions. Data were collected using think-aloud protocols, students' written texts, and semi-structured interviews. To obtain students' emotional responses towards teacher written feedback, grounded theory was employed to analyse think-aloud protocols and semi-structured interviews. Teacher written feedback was tabulated and categorised using a coding scheme developed based on Straub and Lunsford (1995) and Ferris (1997). The success of students' revisions was analysed using an analytical scheme based on Conrad and Goldstein (1999). The results revealed that EFL university students' emotional responses include acceptance of feedback, rejection of feedback, surprise, happiness, dissatisfaction, disappointment, frustration, and satisfaction. Some emotional responses could be attributed to harsh criticism, negative evaluation, and miscommunication between teachers and their students. The study also revealed that emotional responses can affect students' understanding and utilisation of teacher written feedback.


Voice in timed L2 argumentative essay writing
Publication date: January 2017
Source: Assessing Writing, Volume 31
Author(s): Cecilia Guanfang Zhao
The concept of voice is included in various writing textbooks, learning standards, and assessment rubrics, indicating the importance of this element in writing instruction and assessment at both secondary and postsecondary levels. Researchers in second language (L2) writing, however, often debate the importance of voice in L2 writing. Due to the elusiveness of the concept, much of this debate remains theoretical; few empirical studies provide solid evidence to either support or refute the proposition that voice is an important concept to teach in L2 writing classrooms. To fill this gap, the present study empirically investigated the relationship between voice salience, as captured by an analytic rubric, and official TOEFL iBT argumentative essay scores in 200 timed L2 essays. Results showed that voice was a significant predictor of TOEFL essay scores, explaining about 25% of the score variance. Moreover, while each individual voice dimension was strongly or moderately correlated with essay scores when examined in isolation, only the ideational dimension remained a significant predictor of text quality when the effects of the other dimensions were controlled for. Implications of these results for L2 writing instruction are discussed.
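As an illustration of regressing essay scores on analytic voice ratings and reading off the variance explained, a minimal sketch follows; the dimension labels other than "ideational" and all numbers are invented for this example and do not come from the study's 200-essay dataset.

# Hypothetical sketch: fit a linear regression of essay scores on voice-dimension
# ratings and report R^2 (proportion of score variance explained).
# Data and the two non-ideational dimension labels are invented placeholders.
import numpy as np

# columns: ideational, plus two hypothetical voice dimensions (ratings 1-5)
voice = np.array([[4, 3, 3], [2, 2, 3], [5, 4, 4], [3, 3, 2], [4, 4, 5], [2, 3, 2]])
scores = np.array([24, 15, 28, 19, 26, 16])          # essay scores

X = np.column_stack([np.ones(len(voice)), voice])    # add intercept column
beta, *_ = np.linalg.lstsq(X, scores, rcond=None)    # least-squares fit
pred = X @ beta
r2 = 1 - ((scores - pred) ** 2).sum() / ((scores - scores.mean()) ** 2).sum()
print(f"R^2 = {r2:.2f}")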