My favorite part of teaching rhetoric and composition is giving feedback on student writing. I genuinely enjoy reading college papers, despite what some in my field say about it on social media. Giving feedback on student writing means engaging in a kind of dialogue with students. This dialogic exchange is where I push against and open up their ideas. But it’s also where students enlighten me and teach me more about the limits of writing and the power of the humanities. It’s a fascinating process.

Influenced particularly by the many (many) debates about the tension between teaching writing as grammar/correctness and teaching writing’s larger, more experiential features, I’ve developed a style of feedback that works for me. Taking a cue from Chris Anson, who recommends maintaining a reflective and open balance between focusing on error and focusing on content (“Response and the Social Construction of Error”), I use a color-coded system: I highlight in yellow anything that is substantive, moving, original, thought-provoking, or argumentative. In my syllabus and in my interactions with students, I explain that yellow designates passages that are particularly insightful, or, as Peter Elbow might say, aspects of the paper I genuinely like. In green, I highlight egregious “errors” in spelling, punctuation, syntax, etc. I do this so as not to explicitly correct student papers, since most viable research in written communication shows that explicit grammar instruction provides little, if any, benefit to student writing (Lunsford and Connors; Hillocks; Braddock, Lloyd-Jones, and Schoer; and on, and on, and on). Lower-order concerns like grammar are often revised in subsequent drafts anyway, or, as Elbow proposed, I sometimes make the polishing of grammar an assignment completely separate from the content-creation itself.
My marginal comments and the more summative comments at the end are inquisitive, often playful, and sometimes argumentative. In this way, students can use the feedback productively to revise and open up their work. By and large, my students are happy with this simple yet thorough approach to feedback.
But then I have to put a grade on it. Grading is probably the most stressful part of my work as an academic. I have what you might call “grade anxiety.” I might obsess about grades almost as much as my students do. In many ways, and I’m not alone here, I believe that the very notion of grades undermines the mission of writing instruction. The fact that I have a professional and contractual obligation to provide a grade, however, necessitates that I implement a fair, ethical, and consistent grading policy. Therein lies the tension so many of us face.

To mitigate this, I’ve tried grading a few different ways over the years. When I taught first-year composition, cognizant of my objections to grades, I decided transparency was key: I placed every rubric, every scoring guide, and ample evidence for my policies on my course website and syllabus. Despite my efforts at transparency, though, the sheer volume of documents might have confused students more than it illuminated. Later, persuaded by the work going on in holistic and portfolio grading, I implemented a holistic scheme in my intermediate composition course, where students wrote many pieces that went into three “units” or “portfolios,” with each receiving a “unit grade” instead of individual grades on each piece of writing. Writing is not something you can quantify in increments, I told them. Unlike chemistry or math, writing is an unquantifiable art form fraught with ideology and contingency; it cannot be quantified at every conceivable moment in the course. Discussing this openly in class, I found that students were generally receptive to this argument. In my evaluations, though, quite a few students commented that while they understood why I graded writing that way, they still preferred the other way. The peril of “not knowing a number,” despite having ample feedback, was often too great for them. And I don’t blame them either.
In today’s corporate campus, where a college degree is a transaction between customer and institution, a credential purchased for entry into the workforce, the language of quantifiable, positivistic assessment is preferable to the language of local, site-based learning.
Nevertheless, I am continually tweaking my grading philosophy in an effort to make it fair, ethical, and in line with my institutional constraints. So far, I am intrigued by several approaches out there. For example, Cathy Davidson offered a provocative way to introduce “contract grading” in her class: students signed up for As, Bs, or Cs, and “satisfactory” completion of the contract guaranteed the grade. How does one determine what counts as satisfactory? Davidson argues for “crowdsourcing”: that is, students rank one another’s work:
So, this year, when I teach “This Is Your Brain on the Internet,” I’m trying out a new point system supplemented, first, by peer review and by my own constant commentary (written and oral) on student progress, goals, ambitions, and contributions. Grading itself will be by contract: Do all the work (and there is a lot of work), and you get an A. Don’t need an A? Don’t have time to do all the work? No problem. You can aim for and earn a B. There will be a chart. You do the assignment satisfactorily, you get the points. Add up the points, there’s your grade. Clearcut. No guesswork. No second-guessing ‘what the prof wants.’ No gaming the system. Clearcut. Student is responsible.
But what determines meeting the standard required in this point system? What does it mean to do work “satisfactorily”? And how to judge quality, you ask? Crowdsourcing. Since I already have structured my seminar (it worked brilliantly last year) so that two students lead us in every class, they can now also read all the class blogs (as they used to) and pass judgment on whether the blogs posted by their fellow students are satisfactory. Thumbs up, thumbs down. If not, any student who wishes can revise. If you revise, you get the credit. End of story. Or, if you are too busy and want to skip it, no problem. It just means you’ll have fewer ticks on the chart and will probably get the lower grade. No whining. It’s clearcut and everyone knows the system from day one. (btw, every study of peer review among students shows that students perform at a higher level, and with more care, when they know they are being evaluated by their peers than when they know only the teacher and the TA will be grading).
I’ve also been reading a lot about assessment theory lately, and I’m intrigued by scholars like Bob Broad, who makes the case against rubrics entirely. Despite our postmodern inclinations, rubrics, he writes, “the most visible and ubiquitous tool of writing assessment—arguably the aspect of rhetoric/composition that impinges most powerfully and memorably on our students’ lives—teach our students an exactly opposite view of knowledge, judgment, and value” (What We Really Value: Beyond Rubrics in Teaching and Assessing Writing 4). Broad offers a more interpretive approach he calls “Dynamic Criteria Mapping,” a dialogic process in which the class collaborates to create a “map” of what it collectively “values” in writing and then adheres to that map when evaluating student writing. This process follows many of the suggestions Brian Huot makes in his groundbreaking book (Re)Articulating Writing Assessment: namely, that writing teachers need to make better empirical claims about the validity of writing assessment, and, moreover, that writing assessment should necessarily be “site-based” and “locally controlled.”
Whether through contract grading or Dynamic Criteria Mapping, I’m always looking for innovative ways to assess student work. This matters to me in the classroom, but it also has larger implications. Currently I’m revising a dissertation chapter on the English Language Arts standards implemented by the Common Core State Standards Initiative, a large-scale movement across the country to standardize writing outcomes in grades K–12. Part of what Gallagher has called the “Accountability Agenda,” the CCSSI seeks to frame writing outcomes around what Yancey has called “standardization, not standards.” This large-scale homogenization of writing outcomes flies in the face of what recent, cutting-edge research on writing suggests, which means that now, more than ever, writing instructors need to develop stronger validity claims about their assessments. Part of my argument in the chapter is that this also means we need to develop better strategies as an academic discipline for engaging in public debates. But that shall be the topic of another blog post.