Re-Viewing Peer Review

Flynn, Elizabeth A.

Re-reading and reflecting on an essay published 27 years ago was a startling experience. With nothing but hindsight, I experienced surprise and a concern that my tone sounded too assured (those essays that I quoted didn’t sound nearly as weak as I made them out to be). A lot has changed since “Students as Readers of Their Classmates’ Writing: Some Implications for Peer Critiquing” was published in 1984. What about gender? race? class? ethnicity? transnationalism? Were students really as incapable of successfully reading their classmates’ essays as I made them out to be? Are students more successful peer critiquers as upper-class students than as first-year students? Here I will briefly summarize the original essay, describe my subsequent experience with peer review in my own classes, and discuss some recent research on peer review, focusing especially on contexts that might not have been predicted in 1984—teaching English as a second language, teaching English as a second language with technology, and teaching English as a first language with technology.

I began “Students as Readers of Their Classmates’ Writing: Some Implications for Peer Critiquing” by mentioning the enthusiasm of advocates of peer review such as Kenneth Bruffee, Thom Hawkins, Peter Elbow, and James Moffett but observed that their endorsements were often not backed up by empirical evidence. I then discussed a moment of crisis when I attempted to provide some evidence of its effectiveness and began looking closely at drafts, revisions, and critique sheets of students in a first-year English class. I found that students revised by attending to surface errors or stylistic features and less often re-conceptualized or substantially revised their essays. One reason, no doubt, was that the feedback they were receiving from their peers was not as helpful as it should have been. Too often peer reviewers reacted enthusiastically to essays that lacked focus or were poorly organized. I claimed in “Students as Readers” that “If students sensed that a classmate had done some research or knew a lot about a subject, they tended to compensate for the limitations of the paper by identifying its deep structure rather than commenting on incoherence closer to the surface” (122). I continued, “In other words, they read papers which mimicked the style of professional writing as if it were professional writing. The result was that they gave the writers of such papers very little useful feedback” (122). They “created coherence out of incoherence” (124). I then provided two examples of this problem and explained it by discussing the work of reading theorist Frank Smith, who, in Understanding Reading, emphasizes the importance of prior knowledge in the reading experience. I argued that students’ prior reading consists largely of texts written by professionals that are reasonably well organized and focused and that students most often read “to absorb rather than to evaluate” (127). I concluded that students must be trained to recognize incoherence, perhaps by providing them with critique sheets that ask them explicitly to point out gaps, inconsistencies, and irrelevancies (127). Teachers might also provide students with examples of the genre of the student essay, I suggested, and point out some limitations of student writing (127). I then called for more research on peer evaluation (127).

I still use peer critiquing in most of my undergraduate literature and writing classes. Students often comment in their concluding portfolio analyses that they appreciate having an opportunity to get feedback on their essays before submitting them for a grade, and sometimes they note that critiquing others’ papers also improves their own writing. The critique sheet that I often use is filled out and discussed in class and requires peer critics to identify strengths as well as limitations in areas such as focus, development, transitional sentences, and introduction and conclusion. Usually the process is more helpful with upper-class students, who are better able to identify problems and better able to revise on the basis of their classmates’ comments.

I will admit to having lost track of work in composition studies on peer review and was surprised to discover, when I did a search in preparation for this essay, that research on the topic arising out of mainstream composition studies, for the most part, tapered off in the early 1990s and was replaced, within composition studies, by research focusing on peer review using technology. Essays on peer evaluation (or peer review, peer critique, or peer response) in College Composition and Communication, for instance, included one article published in the 1960s, two in the 1970s, eight in the 1980s, and four in the early 1990s. Articles published in College English included three in the 1950s and three in the 1970s. I also discovered a different strand of research, peer review within the context of teaching English as a second language (ESL). Sometimes this work on ESL also included discussion of peer review using technology. Often this work was published in journals that I was unfamiliar with. The MLA International Bibliography identified 185 items from a wide variety of journals when I searched under “peer evaluation” in late May of this year. The oldest was a 1951 article published in College English, and the most recent was a discussion of two approaches to teaching English as a second language written by two Iranian scholars writing from Iran and published in the journal Innovation in Language Learning and Teaching in March of 2011. Most of the recent articles on peer evaluation dealt with teaching English as a second language, either in the U.S. or elsewhere, or with using computer-assisted technology in the process. Some combined the two.

One exception was Jane Van Slembrouck’s “Watch and Learn: Peer Evaluation and Tutoring Pedagogy,” published in Praxis: A Writing Center Journal in 2010. Van Slembrouck’s article arises out of a branch of mainstream composition studies, writing center studies, and is an endorsement of a process approach to tutoring that includes peer evaluation. This is not surprising given that writing centers have generally supported a process approach for decades. Van Slembrouck states, “I have seen that genuinely productive assessment can occur between equals and that observing a peer is inevitably a reciprocal process, prompting meditation on one’s own values and practices.” Her study consisted of having tutors observe other tutors’ sessions and fill out a questionnaire on the basis of their observations, with items such as the apparent goals of the student and tutor, the demeanor of each, and strategies that were effective or ineffective. The observers also had follow-up conversations with their peers. Van Slembrouck reports that most of the tutors regarded the experience as instructive and sometimes even enjoyable. One tutor observes: “In the end, observing Victor was probably more useful for my own sessions than having someone observe me and then telling me what they thought.” Van Slembrouck concludes, “As a component of tutor assessment, peer observation is not a departure from the productive network of dialogue that makes up a bustling writing center so much as an attempt to focus in closely, see and appreciate it.”

Peer Evaluation within ESL Face-to-Face Contexts

Of the eight essays on peer evaluation published in 2010 and 2011 and reported in the MLA International Bibliography in late May, Van Slembrouck’s was the only one to arise out of mainstream composition studies and one of two that were unequivocally supportive of peer evaluation. It was the only one, as well, that did not make use of a social scientific research design and one of only three that were set entirely in a U.S. context. Six of the eight reported results of using peer evaluation in ESL contexts, computer-mediated contexts, or both. The six ranged from qualified endorsements of peer evaluation to demonstrations of its ineffectiveness.

Dana R. Ferris’s Response to Student Writing: Implications for Second Language Students (2003) describes a shift within ESL research and pedagogy away from peer evaluation and toward more traditional approaches. She initially discusses some ways in which mainstream composition research focusing on first-language (L1) English users influenced research on English as a second language (L2). Ferris observes that “As L2 writing specialists began to embrace the process approach, the implementation of peer response in ESL writing classes was rapid and widespread, especially in the United States” (69). According to Ferris, enthusiasm began to lessen, however, in the 1990s as research began to suggest that peer feedback was ineffective in helping students revise and improve their writing and perhaps inappropriate given “students’ range of cultural norms and expectations about group dynamics, the role of the teacher, and face-saving” (69). Ferris concludes, “Still, there exists to this day considerable ambivalence among L2 writing instructors and scholars about whether peer feedback does more good than harm and whether its benefits justify the time required to utilize it effectively” (70).

Four essays on peer review published in 2010 or 2011 had a face-to-face ESL emphasis and in some ways reflected the ambivalence that Ferris mentions. I’ll discuss three additional articles dealing with ESL students when I consider computer-mediated peer review. Of the four articles on the merits of using face-to-face, i.e., non-computer-mediated, peer review in ESL, only one clearly demonstrates its value. In “Effects of Peer- versus Self-Editing on Students’ Revision of Language Errors in Revised Drafts,” published in System: An International Journal of Educational Technology and Applied Linguistics, Nuwar Mawlawi Diab, a professor in the Department of Humanities at Lebanese American University in Lebanon, describes a study investigating differences in the reduction of language errors in revisions after two different treatments, peer review and self-review. Diab’s research design included a comparison group and an experimental group in first- and second-year ESL classes taught by the researcher. Arabic was the native language of students in both groups, and English or French was their second language. Instruments of data collection were a questionnaire, a diagnostic essay, an editing form, and a formula to calculate language errors. The questionnaire established that the number of years that the students had studied English was similar in both groups, though the experimental group had had less exposure to writing in terms of studying grammar, learning organization patterns, engaging in peer-editing, reading and imitating examples of famous writers, and reading books about writing. There were also differences in attitudes toward editing, though both groups seemed to believe in their peers’ ability to comment on their essays. Both groups also agreed that students are able to edit their colleagues’ essays when trained to do so. The diagnostic essay showed some difference in students’ language ability in favor of the experimental group. Results indicated a statistically significant difference between the comparison and experimental groups in favor of the experimental group for rule-based language errors (subject/verb agreement and pronoun agreement) but no statistically significant difference for non-rule-based errors (wrong word choice and awkward sentence structure). Diab concludes that the experimental group’s ability to reduce rule-based errors seemed to be the result of peer interaction when editing each other’s essays and that peer editing can be used to improve students’ language ability by reducing their frequency of errors.

The three other essays focusing on face-to-face peer evaluation in ESL classes that I examined provided even less evidence of its value. Two document resistance to using peer editing in ESL classrooms because of deeply entrenched traditional educational systems; the third reports a study demonstrating that peer review was less effective than providing students with models of good writing. Kelvin KaYu Chong, in “Investigating the Perception of Student Teachers in Hong Kong Towards Peer-editing,” attempts to explain why a number of student teachers in Hong Kong are reluctant to use peer-editing. Chong himself is clearly in favor of the approach. He examined both teachers’ personalities and external constraints arising from local social situations that contributed to teachers’ resistance and reluctance to implement peer editing in Hong Kong’s educational institutions. Chong concludes that the particular approach to peer editing that teachers take should be adjusted to suit the particular circumstances of their teaching situations, and he endorses a process-oriented approach because it focuses on the growth of the writer.

Milica Savic, in “Peer Assessment—A Path to Student Autonomy,” also focuses on the challenges of moving from a traditional educational system, this time in Serbia, to a more progressive, process-oriented one. Savic says, “Although peer assessment appears to have become ‘a world-wide phenomenon’ (Falchikov & Goldfinch 2000: 287), employed in different educational settings around the world, judging by the growing body of literature on the subject, there still seem to be very few practitioners of this mode of assessment in the Serbian higher education” (254). Savic attributes the resistance to the widespread belief that only teachers are reliable assessors of student writing (254). She reports that some students do not feel their peers are qualified or objective enough to provide them useful feedback (254). Savic reports on a study she conducted focusing on English language students’ ability to provide qualitative and quantitative feedback on their peers’ argumentative essays and explores their attitudes toward peer assessment. She found that only 50% of her participants identified all major problematic areas, while others had difficulty doing so. Easiest for the students to identify were problems with argument development, proper sentence and paragraph linking, and sentence structure. Correcting grammatical errors, however, presented difficulties: a number of students corrected already correct sentences or failed to notice errors in incorrect ones (258). Students also had difficulty detecting inappropriate vocabulary. They were better able to provide suggestions on how to make paragraphs more coherent or how to further an argument, though some students commented on how to improve coherence in paragraphs the instructor considered coherent and well-developed (259). In her discussion of her results, Savic also observes that only 60% of the participants made well-balanced comments that addressed both strengths and weaknesses; the rest focused exclusively on weaknesses and failed to emphasize the strengths of the essays (259). Also, the instructor’s and the students’ grades did not show a satisfactory degree of agreement (259). In addition, students did not think they benefited from their colleagues’ feedback: only 33% answered in the affirmative when asked if they wanted to have their work assessed by peers more often (260). Some students mentioned “mean” comments. Others doubted that their colleagues could improve their writing. Savic recommends open disclosure and discussion of evaluation criteria with students before peer evaluation and suggests perhaps excluding quantitative peer assessment in favor of qualitative feedback alone (263). She also recommends that teachers attend to the social component of peer assessment and that students be asked to begin all feedback with a positive comment (263).

The final article of the three that focus on problems associated with peer critiquing suggests that providing students with models of successful writing is more effective than peer feedback in improving student writing. In “Comparing Native Models and Peer Feedback in Promoting Noticing through Written Output,” Sasan Baleghizadeh and Fatemeh Arab of Shahid Beheshti University in Tehran, Iran, report that “the native model group outperformed the peer feedback group in filling their linguistic holes and gaps. Moreover, the effect of native models on participants’ retention and subsequent learning was higher than for the peer feedback group” (63). Baleghizadeh and Arab used a “pretest-posttest quasi-experimental design” (66). Students from two different classes were randomly assigned to two groups with two different treatments, exposure to native models or peer feedback. The authors state, “It is quite clear that peer feedback was not very helpful in promoting awareness among the participants” (72). They conclude that since the participants were at the intermediate level, they did not have adequate knowledge to provide useful feedback (75).

Peer Evaluation within Computer-Mediated Second Language Contexts

If the Ferris book cautions against the use of peer evaluation in ESL contexts, and the articles described above do little to demonstrate its effectiveness, peer evaluation in computer-mediated ESL contexts might seem more promising. Three articles published in Self, Peer, and Group Assessment in E-Learning (2006), edited by Tim S. Roberts of Central Queensland University in Bundaberg, Australia, are endorsements of the approach. Roberts’s preface indicates that the essays in the collection describe ways in which technology can be usefully employed in online environments, so all of the essays report successful uses of e-environments. Three specifically address peer assessment in English-language-learning contexts. Pamela L. Anderson-Mejías, in “A Case Study in Peer Evaluation for Second Language Teachers in Training,” concludes that “Peer evaluation is an essential part of true collaboration and its use is ever increasing within ‘real world’ context such as the school setting, business, and perhaps industry. Thus, it must be part of the university training experience” (43). Anne Dragemark of Göteborg University, Sweden, presents research findings in the area of self-assessment obtained from the European Leonardo Project: Learning English for Technical Purposes (LENTEC), carried out in 2001-2003. She reports that shifting some of the responsibility for assessment from teachers to students was successful (169). Bernarda Kosel of the University of Ljubljana, Slovenia, in “Self and Peer Assessment in a Problem-Based Learning Environment: Learning English by Solving a Technical Problem—a Case Study,” concludes that peer assessment in e-learning environments is valuable because it 1) reduces the teacher’s burden of grading papers; 2) changes both the teacher’s and the students’ perspectives on the learning process; and 3) creates a feeling that students are members of an online community with a shared interest (206).

Two of the three studies of peer evaluation within computer-mediated ESL contexts published in 2010 and 2011 concurred with the conclusions of the essays in the Roberts collection, reporting generally positive results. The third study is less conclusive. Osama Sayed, in “Developing Business Management Students’ Persuasive Writing through Blog-based Peer-Feedback,” investigated the effects of using blog-based peer feedback on the persuasive writing of EFL business management students at the community college in Bisha, King Khalid University, Saudi Arabia. Sayed used a pre-test/post-test experimental and control group design. Analysis of the differences between the groups’ mean scores on the pre- and post-measurements revealed a significant improvement in the experimental group students’ persuasive writing (54). Sayed found that the blogs reduced social-context clues such as gender, race, and status, and nonverbal cues such as facial expressions and body language. Sayed says, “computer-mediated communication (CMC) provides a safer and a more relaxed environment for language learners” (55). Students preferred instructor feedback, but the quality of their postings was maintained through the use of online peer feedback. Sayed also found that “students participating in anonymous e-peer review performed better on the writing performance task and provided more critical feedback to their peers than did students participating in the identifiable e-peer review” (57). Arabic and Islamic culture is given as an explanation: in these cultures it is considered aggressive and impolite to tell people face-to-face about their mistakes (61). One of the implications of the study, according to Sayed, is that “In conservative societies, where strict gender segregation is enforced and where girls and boys are separated in school, blog-based peer feedback could be an effective tool for the mutual benefit of the two genders and for providing a forum, not only for the development of composition writing, but also for social interaction and negotiation of meaning” (62).

A second essay focusing on computer-mediated peer review in ESL also found the approach successful. In “Computer-Mediated Corrective Feedback and Language Accuracy in Telecollaborative Exchanges,” Margarita Vinagre of the University of Madrid and Beatriz Munoz of the University of Applied Sciences Emden/Leer report on a three-month project exploring the impact of peer feedback on the development of learner accuracy. They organized an e-mail exchange between seventeen post-secondary learners of Spanish and German. In their study, Vinagre and Munoz identify three types of corrective feedback: a) feedback—identifying errors but leaving it to the learner to make the correction; b) correction—providing the correct form, which leads to revision; and c) remediation—providing learners with information that allows them to revise or reject the wrong rule they are using, thereby inducing them to revise their mental representation of the rule and avoid recurrence of the error (74). They found that learners re-used the suggested corrections (i.e., “recycled” them) 15% of the time after corrections were provided and 85% of the time after remediation was provided. Feedback alone, as they define it, had no effect.

A third study, “Dynamic Motives in ESL Computer-Mediated Peer Response” by Li Jin of DePaul University and Wei Zhu of the University of South Florida, published in Computers and Composition, is ostensibly a demonstration of the value of peer review using instant messaging in ESL classes. The study raises questions, however, that it does not adequately address. Jin and Zhu found that “ESL students were driven by heterogeneous and multiple motives even when they were participating in the same task, and their engagement in multiple motives was dynamic rather than static.” The authors summarize previous research that undergirds their study, most importantly the finding that second-language learners participate more actively in group work that takes place in electronic environments. The authors use activity theory to enable their analyses of their research subjects’ motives but observe that the participants’ motives were not linear, simple, or straightforward but, rather, a “recursive, complex, and interpretive process.” The case study focuses on two ESL students who were partners in the Computer-Mediated Peer Response (CMPR) tasks. A serious problem they uncover in analyzing their data, however, and one that they do not adequately address in the discussion of their findings, is that one of the partners did not concentrate on her online interaction. While chatting with her partner, for instance, she opened three more IM chat windows and browsed several music web sites throughout the lab time. She also started to play an online poker game with one of her friends, rejected all of the opinions and suggestions that her partner gave about her essay, and expressed her resentment about his comments. In addition, she did not provide any comments or suggestions on her partner’s essay. Her partner, in contrast, was motivated to develop his writing skills by collaborating with her. The authors might have concluded that the collaboration was only partially successful in that only one of the partners took it seriously. Instead, they state: “the use of IM not only triggered the formation of new motives within and across learning tasks but also afforded flexible motive shift among the activity systems within and across tasks.” They do acknowledge that technology can be a distraction for students and that teachers need to make sure that students have adequate technological skills so that they can perform CMPR tasks successfully.

Peer Evaluation in Computer-Mediated English as a First Language Contexts

Like the work on using peer evaluation in computer-mediated ESL classrooms, work focusing on using technology in peer review in non-ESL classrooms is generally appreciative. Michelle Trim’s textbook What Every Student Should Know About Practicing Peer Review is addressed to students rather than teachers and discusses both traditional peer review and electronic peer review. In the section on traditional peer review, she discusses advantages and disadvantages of peer review of early drafts and peer review of developed drafts. It appears that Trim sees more advantages than disadvantages in computer-mediated peer review, though she does aim for a balanced discussion. In describing the advantages and disadvantages of peer review of early drafts, for instance, she acknowledges that getting feedback in person gives students an opportunity to ask questions and explain their comments at the time of review (9). She points out, though, that written marginal comments can be difficult to read and absorb. Other disadvantages are that limiting review to one class period can make it rushed, commenting and responding by hand takes time and makes changing or revising comments difficult to do neatly, and handwriting can be difficult to interpret and takes up a large amount of space on the page (9). In her discussion of the advantages and disadvantages of peer review using word processing software, she begins with advantages rather than disadvantages and focuses on the greater legibility of text produced with a word processing program, the capability of such programs to provide students with multiple copies of comments, and the ability to use the Track Changes feature of Microsoft Word (24). She does acknowledge, though, that computer files can be corrupted or accidentally deleted, that students have to coordinate their exchange of word-processor files to complete the review, and that students may feel intimidated by the technological knowledge required to transmit, alter, save, and return an electronic file (25).

A collection published by the Modern Language Association, Teaching Literature and Language Online, includes Dawn M. Formo and Kimberly Robinson Neary’s “Constructing Community: Online Response Groups in Literature and Writing Classrooms,” a description of how they used online response groups (ORGs) in their classes. Their essay is an unqualified endorsement of the approach. They say, “In our study of these groups, we concluded that ORG pedagogy engenders engaged writing communities dedicated to useful response” (147). They continue, “The ORG pedagogy we present here through a case study provides students with a productive language for response that minimizes students’ frustration with specious comments. Even more, this type of focused digital response helps students recognize the value of revision and develop the skills required to provide and receive useful digital feedback” (148). Formo and Neary conclude with the hope that their readers will experiment with online response groups (157).

Michael Fosmire’s “Calibrated Peer Review: A New Tool for Integrating Information Literacy Skills in Writing-Intensive Large Classroom Settings” reports the strengths and limitations of software that enables peer review in large classes and reduces the workload of the faculty. A librarian, Fosmire collaborated with two other colleagues in a large lecture course taught in the Purdue University College of Science in 2007, “Great Issues in Science and Society.” The calibrated peer review (CPR) program, he says, “automates the entire process of submitting, distributing, and compiling grades for an assignment” (148). Described more fully in an article by Orville Chapman and Michael Fiore and one by Ralph Robinson, CPR consists of the following steps (a schematic sketch of the scoring logic follows the list):

  • Students are given a writing assignment, often based on a reading selected by the instructor.
  • Students compose and submit their essay by a certain deadline to the CPR software server.
  • The CPR system then provides students with three instructor-created “calibration” essays to grade according to a provided rubric.
  • After “passing” the calibration essays, to ensure they understand the grading criteria sufficiently, students receive three of their peers’ essays to grade against the same rubric.
  • Students then evaluate their own essays, and those scores are compared to a weighted average of peer evaluations to determine if the students accurately evaluated their own work.
  • Instructors may review any essays and change scores when students feel they were unfairly graded or when the instructor notices potentially anomalous grades. (148)
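
To make the scoring logic implied by these steps concrete, here is a minimal sketch in Python. It is my own illustration, not the actual CPR implementation: the function names, the tolerance values, and the idea of weighting reviewers by calibration accuracy are all assumptions.

    # Minimal sketch of a CPR-style scoring flow. All names, thresholds, and
    # the weighting scheme are illustrative assumptions, not details of the
    # actual CPR software.

    def passes_calibration(reviewer_scores, instructor_scores, tolerance=1.0):
        """A reviewer 'passes' if every calibration essay is scored within
        tolerance of the instructor's score for that essay."""
        return all(abs(r - i) <= tolerance
                   for r, i in zip(reviewer_scores, instructor_scores))

    def weighted_peer_score(peer_scores, reviewer_weights):
        """Combine the peer scores, weighting each reviewer (e.g., by how
        accurately that reviewer scored the calibration essays)."""
        return (sum(s * w for s, w in zip(peer_scores, reviewer_weights))
                / sum(reviewer_weights))

    def self_assessment_accurate(self_score, peer_scores, reviewer_weights,
                                 tolerance=1.0):
        """Compare a student's self-score to the weighted peer average."""
        return abs(self_score
                   - weighted_peer_score(peer_scores, reviewer_weights)) <= tolerance

    # Example: a reviewer passes calibration; three weighted peer reviews
    # average to 7.2; a self-score of 9 would be flagged for the instructor.
    print(passes_calibration([7, 8, 6], [7.5, 8, 6.5]))             # True
    print(weighted_peer_score([8, 6, 7], [1.0, 0.5, 1.0]))          # 7.2
    print(self_assessment_accurate(9, [8, 6, 7], [1.0, 0.5, 1.0]))  # False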

Fosmire, in addition to providing his own assessment of CPR, reviews literature that evaluates it, most of which finds it to be a useful tool (150). He does observe, however, that Reynolds and Moskovitz concluded that “many assignments do not probe higher level skills but rather are too concrete and fact based” (150) and that “students have a tendency to overestimate their achievement” (157). He also acknowledges, in the study he is reporting on, that students who had used CPR tended to have a negative attitude toward it because they preferred to have their papers graded by their instructors rather than by their classmates (155-6) and felt that they spent too much effort reviewing with CPR (156). He nevertheless reports that students’ peer reviews were generally more accurate than their self-evaluations (156), that students seemed to engage more with the subject matter of the CPR assignments (156), and that “the CPR system was a good vehicle for learning content but less effective at improving writing skills (mainly because students did not believe their peers were providing good feedback)” (158). Fosmire concludes that CPR “shows substantial promise in support of integrating information literacy competencies in a writing intensive, large-classroom environment” (158).

Conclusion

Changing the context within which peer evaluation takes place changes how it is conducted, who takes part in it, and why it is used. Peer evaluation in ESL contexts often involves identification of errors, for instance, whereas this is not usually a primary concern in process-oriented composition studies. Peer evaluation designed to have peers contribute to classmates’ grades is very different from having them provide feedback on an early draft so that classmates can improve their papers. Having students use instant messaging or software that enables reviewing is very different from having them fill out critique sheets in class and explain their suggestions face to face. Having students whose native language is German provide feedback on writing in German by students in Spain, and vice versa, is very different from having students whose native language is English respond to writing in English. The task becomes more complex. The stakes become higher.

I wrote “Students as Readers of Their Classmates’ Writing” at a time when process-oriented composition studies was replacing traditional error-oriented pedagogies. It was necessary for researchers to demonstrate its effectiveness, and the work that I cite in the piece promotes it. My piece is not promotional, however, but critical. I point out that without training, students may not be able to provide useful feedback. The same shift has occurred within ESL research on peer review—it was initially favorable but over time has become critical. I think we are still in the promotional phase of work on computer-assisted peer review, but the seeds of a critical perspective are contained within some of the work I discuss. Students can misuse instant messaging, and students prefer being graded by faculty rather than classmates. I look forward to seeing the critical phase of peer review in computer-mediated settings emerge.

Works Cited

Anderson-Mejías, Pamela L. “A Case Study in Peer Evaluation for Second Language Teachers in Training.” Self, Peer, and Group Assessment in E-Learning. Ed. Tim S. Roberts. Hershey, PA: Information Science Publishing, 2006. 17-63.

Baleghizadeh, Sasan and Fatemeh Arab. “Comparing Native Models and Peer Feedback in Promoting Noticing through Written Output.” Innovation in Language Learning and Teaching 5.1 (2011): 63-79.

Bruffee, Kenneth. “Collaborative Learning: Some Practical Models.” College English 34 (February 1973): 634-43.

Chapman, Orville L., and Michael A. Fiore. “Calibrated Peer Review.” Journal of Interactive Instruction Development 12.3 (2000): 11-15.

Chong, Kelvin KaYu. “Investigating the Perception of Student Teachers in Hong Kong Towards Peer-editing.” English Language Teaching 3.1 (March 2010): 53-9.

Diab, Nuwar Mawlawi. “Effects of Peer- versus Self-Editing on Students’ Revision of Language Errors in Revised Drafts.” System: An International Journal of Educational Technology and Applied Linguistics 38.1 (2010): 85-95.

Dragemark, Anne. “Learning English for Technical Purposes: The LENTEC Project.” Self, Peer, and Group Assessment in E-Learning. Ed. Tim Roberts. Hershey, PA: Information Science Publishing, 2006. 169-190.

Elbow, Peter. Writing Without Teachers. New York: Oxford UP, 1973.

Falchikov, N., and J. Goldfinch. “Student Peer Assessment in Higher Education: A Meta-Analysis Comparing Peer and Teacher Marks.” Review of Educational Research 70.3 (2000): 287-322.

Ferris, Dana R. Response to Student Writing: Implications for Second Language Students. Mahwah, NJ: Erlbaum, 2003.

Flynn, Elizabeth A. “Students as Readers of Their Classmates’ Writing: Some Implications for Peer Critiquing.” The Writing Instructor (Spring 1984): 126-28.

Formo, Dawn M., and Kimberly Robinson Neary. “Constructing Community: Online Response Groups in Literature and Writing Classrooms.” Teaching Literature and Language Online. Ed. Ian Lancashire. New York: Modern Language Association, 2009. 147-64.

Fosmire, Michael. “Calibrated Peer Review: A New Tool for Integrating Information Literacy Skills in Writing-Intensive Large Classroom Settings.” Libraries and the Academy 10.2 (April 2010): 147-63.

Hawkins, Thom. Group Inquiry Techniques for Teaching Writing. Urbana: ERIC/NCTE, 1976.

Jin, Li, and Wei Zhu. “Dynamic Motives in ESL Computer-Mediated Peer Response.” Computers and Composition 27 (December 2010): 284-303.

Kosel, Bernarda. “Self and Peer Assessment in a Problem-Based Learning Environment: Learning English by Solving a Technical Problem—A Case Study.” Self, Peer, and Group Assessment in E-Learning. Ed. Tim Roberts. Hershey, PA: Information Science Publishing, 2006. 191-209.

Moffett, James. Teaching the Universe of Discourse. Boston: Houghton Mifflin, 1968.

Robinson, Ralph. “An Application to Increase Student Reading and Writing Skills.” The American Biology Teacher 63.7 (2001): 474-80.

Savic, Milica. “Peer Assessment—A Path to Student Autonomy.” New Approaches to Assessing Language and (Inter-)Cultural Competences in Higher Education. Eds. Fred Dervin and Eija Suomela-Salmi. Frankfurt: Peter Lang, 2010. 253-66.

Sayed, Osama H. “Developing Business Management Students’ Persuasive Writing Through Blog-based Peer-Feedback.” English Language Teaching 3.3 (September 2010): 54-66.

Slembrouck, Jane Van. “Watch and Learn: Peer Evaluation and Tutoring Pedagogy.” Praxis: A Writing Center Journal 8.1 (Fall 2010). http://projects.uwc.utexas.edu/praxis/?q=node/340.

Smith, Frank. Understanding Reading. New York: Holt, Rinehart, & Winston, 1978.

Trim, Michelle. What Every Student Should Know About Practicing Peer Review. New York: Pearson/Longman, 2007.

Vinagre, Margarita, and Beatriz Munoz. “Computer-Mediated Corrective Feedback and Language Accuracy in Telecollaborative Exchanges.” Language Learning & Technology 15.1 (February 2011): 72-103.

Provenance: 

This text was accepted for publication after an anonymous peer review process.

Publication date: 
2011-12