How often have you seen a writing assignment covered in red pen, with numerous little symbols and codes highlighting comma splices or subject-verb disagreement? I am guilty of doing this myself (though on Google Docs, not with red pen). Most of us agree that content and organization are primary, but accuracy does play an important role in communication. So, we persist in written error correction. And students seem to persist in written errors. Are we wasting our time? Are students not taking our feedback into consideration? What is going on with written corrective feedback (for grammar), and how can this inform our teaching practice? This article is Part I in a two-part series on error correction. Part I looks at the lack of evidence proving its effectiveness and why that may be so. Part II will look at a meta-analysis and describe the circumstances under which written corrective feedback is effective.
Ferris, D. R. (2004). The “grammar correction” debate in L2 writing: Where are we, and where do we go from here? (And what do we do in the meantime…?). Journal of Second Language Writing, 13(1), 49-62. http://www.sciencedirect.com/science/article/pii/S1060374304000086
Ferris writes this article prompted by the back-and-forth in the literature over Truscott’s (1996) assertion that “error correction is harmful and should be abolished”. After examining the research on this subject, as well as conducting her own studies, Ferris comes to the following conclusions:
1. The Research on Error Correction Is Inadequate
The best way to test whether error correction is effective is to set up simple correction/no-correction treatment groups. If the group that received correction produces more accurate writing, then we can say that error correction is helpful. However, Ferris found (as of 2004) that only six studies had actually done this. One possible reason is the ethical dilemma of withholding error correction, which most researchers believe is beneficial. And, according to Ferris, the results of these six studies are mixed: some show positive effects, some show no effects, and one is inconclusive.
2. Research Is Incomparable Because of Design Differences
Research on effective feedback has taken place in a variety of settings, making it hard to generalize. However, Ferris argues that if the research showed the same results under various conditions, that would be an argument for generalization. Yet this is not the case; the results differ widely. Furthermore, there had been (as of 2004) no replication of previous studies.
3. Research Predicts Positive Effects
Despite the issues with the research, extant evidence certainly predicts (though does not prove) positive effects. Ferris, in summarizing the possible effects of corrective feedback, writes (p. 56):
- Adult acquirers may fossilize and not continue to make progress in accuracy of linguistic forms without explicit instruction and feedback on their errors.
- Students who receive feedback on their written errors will be more likely to self correct them during revision than those who receive no feedback—and this demonstrated uptake may be a necessary step in developing longer term linguistic competence.
- Students are likely to attend to and appreciate feedback on their errors, and this may motivate them both to make corrections and to work harder on improving their writing. The lack of such feedback may lead to anxiety or resentment, which could decrease motivation and lower confidence in their teachers.
Ferris states that, “at minimum it can be said that if the existing longitudinal studies do not reliably demonstrate the efficacy of error feedback, they certainly do not prove its uselessness, either” (p. 55). Here, she begins to focus more on what the research is lacking. This includes not only experimental designs and replication but also new ways of designing research that attempt to answer the big questions directly. To avoid the ethical dilemma of withholding a potentially beneficial treatment, she gives an example design: two courses, taught by the same instructor, where one receives grammar notes at the end of the text and the other receives in-text corrective feedback. She also argues that research questions need to be more specific, looking at the effects of revision after corrective feedback, grammar instruction combined with feedback, charting errors (e.g. in grammar logs), which types of errors are treated, and how explicit the feedback is.
So, what do we do?
Ferris acknowledges that, as a researcher and teacher, the tension between intuition and the lack of evidence makes the job of writing instruction more difficult. It leaves her asking which she should do: hold off until the evidence shows positive effects, or work from experience, intuition, and student desires. She recommends being careful and systematic in providing feedback, as well as sensitive to not overwhelming or discouraging students. She also points out that error correction is not the only approach: consciousness-raising, grammar instruction, practice, accountability, and treating editing as a problem-solving opportunity all have a place.
Takeaway
I certainly understand the tension that Ferris feels. There has been a lack of evidence supporting corrective feedback, and anecdotal evidence reinforces this: how many times do I need to correct the same errors? Where is the uptake? However, without more research, it’s hard to say what is or is not effective. Should we keep trucking along with providing feedback, or should we wait for a complete picture, if such a thing is even possible? In my own opinion, we should provide feedback, but be more strategic about it: targeted at only a few issues, smaller in quantity, and always secondary to working with content, organization, structure, and critical thinking.
References
Truscott, J. (1996). The case against grammar correction in L2 writing classes. Language Learning, 46, 327–369.
I have found Ferris’ unconditional defence of error correction (the very same arguments since her first study in 1995!!!) as biased as Truscott’s fierce criticism of it. My PhD study did find that error correction can work in reducing a number of mistakes (especially the ones that can be addressed through a simple heuristic) in a university context, but that it must involve a lot of diagnostic work (to identify the causes of error), a lot of metacognition enhancement, much scaffolding (in the way of reminders and checklists), an incredible amount of time and effort on the part of the instructor and, most importantly, masses of learner intentionality. The student must WANT to eradicate errors.
What I find irritating about Ferris’ position is that she doesn’t consider the enormous amount of work that effective error correction entails. In theory, all instruction has the potential to be effective when you have a lot of time at your disposal to make it work. The crux of the matter is that busy teachers do not have the luxury to do what I could do on my very light timetable during my PhD error correction intervention. Error correction, like any other instruction, works when it allows for a lot of practice and recycling of the target items and when a student is developmentally ready to acquire them. Moreover, all of her studies’ findings have been extremely questionable in terms of external and internal validity.
Read this article of mine for more on this: https://gianfrancoconti.wordpress.com/2017/02/04/why-marking-your-students-books-should-be-the-least-of-your-priorities/
Hi Gianfranco,
Thanks for commenting. I don’t think Ferris has an “unconditional” defense, as the main point of this article is that error correction may work, but needs more research. That is clearly not 100% rousing support for it. Part II (out Monday) will shed some more light on this issue, and it directly addresses Ferris’ critiques of error correction research.
I think I completely agree with your error correction conditions – if you are going to make error correction effective, it is more than a red pen (or a Google Docs comment in my case). As for the research, I am not aware of any research designs that look at error correction + follow-up error treatment (something I do in my own practice), possibly because it could be difficult to separate out which variable is causing any effect, and because, as Ferris points out in the paper, denying such treatment could be construed as unethical.
While I agree with most of what you said, what evidence can you present for this claim: “Moreover, all of her studies’ findings have been extremely questionable in terms of external and internal validity”? All her studies? These are in well-respected peer-reviewed journals. Surely they have limitations, but how can you say that all her studies are questionable, especially without pointing out some of the flaws? I have not read all her studies, though I have read a number of them.
Would research into written feedback in UK state schools be useful?
Recently, schools have begun a practice where a teacher comments on work, the student replies, and the teacher replies to the reply. It’s gaining favour because of studies showing improvement and because it requires no spending on materials.
There’s a growing backlash, both because of criticisms of the studies used and because of the time demands it places on teachers.
(side note, I can’t seem to get logged in to comment via WordPress? Odd.)
Dialogic feedback sounds very interesting. I have done it myself using the Google Docs commenting feature, though not often enough. You might want to consider Learner-Driven Feedback (search this site). Is the research that you are talking about on L1 or L2 students?
I also use the Google Docs comments, as it allows me to have a ‘conversation’ with a student about different issues in their writing, but at times that suit each of us.
My anecdotal experience is that the effectiveness of feedback largely depends on motivation. Last year, I was teaching sixth formers preparing for the IELTS. Feedback was generally ignored until they wrote the exam and got a result far below what they needed; suddenly they wanted even more feedback and attempted to implement it!
The research I mentioned is on L1 students, across a variety of subjects but not necessarily language. What I’ve heard from teachers is that the students aren’t usually engaged in the process, so it can quickly become a tick-box exercise. Do you think there’s a difference in the value of error feedback for other subjects vs languages?
I’ll have a look for the learner-driven items.
Metalanguage to describe the nature of ‘errors’ (is that the right broad term?) can be tricky, both for the rater/marker (‘Do I call this a style error or a grammar error or something else?’) and the receiver of the feedback (‘What’s ‘style’ here and how do I change it to the ‘correct’ style?).
Ursula Wingate (2012) looked at the potential metalanguage issue in some research she did on essay writing. One of her conclusions regarding rater feedback is that ‘unknown concepts are used to explain unknown concepts, and different labels are used for the same concepts (critically approach/evaluate).’
Similarly, Andy Adcroft (2011), working in the School of Law at Surrey University, talks about the ‘mythology’ of feedback and the dissonance which occurs: ‘academics and students have different perceptions of feedback and this creates dissonance as the two groups offer different interpretations of the same feedback events.’
References
Adcroft, A. (2011). The mythology of feedback. Higher Education Research and Development, 30(4), 405-419.
Wingate, U. (2012). ‘Argument!’ Helping students understand what essay writing is about. Journal of English for Academic Purposes, 11, 145-154.
That has come up as another issue with feedback: different interpretations and a lack of clarity on the part of the instructor (e.g. a comment on a paper that simply says “unclear” – why is it unclear?).
This is why proficiency plays a big role. In addition, some have done research on providing audio feedback instead of written feedback. I haven’t done this myself, as I feel it adds a linguistic burden, but some report success. Most would recommend conferencing, so that you can both explain feedback and answer questions.
Thanks for this. Really useful. I teach in a university so there’s plenty of written work to be corrected and I’ve often asked myself whether written feedback actually works. I think the issue might be lack of time to sit down with each student and really talk over the feedback and point out how they can improve. Another one might be motivation (or rather lack thereof) to improve.
I also run 1-1 tutorials for MA students who are writing their theses, where they get a chance to send in a few pages of their writing and come in for a f2f feedback session. I’ve noticed that these are incredibly beneficial, and I’ve seen some great progress from one session to another. It might be because the students who come to those tutorials are already highly motivated and interested. They also receive quite a lot of personalised feedback. And the sessions last anything from 30 minutes to an hour, which gives you plenty of time to offer feedback.
Another situation where I’ve noticed feedback definitely working is the group feedback sessions that we run for 1st and 2nd year BA students in various applied sciences. The way it works is that we correct their writing before the session, and then we have a two-hour feedback session during which they get a chance to ask questions about the corrections and to correct their work. So you’ve got plenty of time as a teacher during the session to walk around and help students improve.
The problem is, though, that the two options above are not really feasible in a general English or even general EAP course.
Thanks for commenting! I agree that it can be very beneficial, especially with advanced learners. It sounds like you have quite intense feedback sessions. Are you focusing solely on grammar, or also on content, organization, etc.? All the research I have read has been on grammar accuracy, so I wonder about the other aspects too.
It depends on the student and what they need help with, so I’ll focus on the most pressing issue. I really enjoy the 1-1 sessions. They give you the opportunity to really see a student’s progress. You feel like the time you spent correcting was worthwhile. Sometimes when teaching other courses, especially ones not focused on writing, I felt like all that effort put into giving feedback, highlighting mistakes, etc. was a bit of a waste of time. You know, especially when there’s no time to chat with the student, give them a chance to rewrite it for further feedback, etc.
In terms of improvement, I’ve seen students improve coherence and cohesion a lot from one session to the next. So feedback can definitely work, if the course allows you to spend sufficient time going over the mistakes and guiding the student.