So again, I’ve violated my one-post-a-month policy. But on the bright side, the dissertation is done and the defense is scheduled for April. Pausing real quick to do a happy dance……
I also just got back from the 2015 Conference on College Composition and Communication in lovely Tampa, FL. My panel, “Twenty-Five Years since the ‘Troubles at Texas’: Linda Brodkey and the Risks of Writing Pedagogy,” was a huge success. The panel reflected on the infamous writing course English 306: Writing about Difference, which never ran at UT Austin in the summer of 1990 because administrators censored it after several dissenting faculty members took their grievances public in the form of a moral panic. I presented a chapter from my dissertation which argues that Brodkey’s efforts that summer should not be seen as a “failure” to gain consensus but as a disruptive stance toward the status quo of writing and writing pedagogy. Libby Allison from Texas State gave a wonderful historical take on the broader discrimination lawsuits underway in Texas leading up to and following the “Troubles at Texas”; David Bleich from the University of Rochester spoke passionately about the ways university censorship has proliferated throughout its institutional history; Mary Boland from California State University, San Bernardino argued that the folklore of our field unfairly characterizes Brodkey as a lone hero when we should in fact see her work as the standard the whole field should strive toward; and Shelli Fowler, a former graduate student of Brodkey’s and now a professor at Virginia Tech, gave a beautiful response reflecting on her time at Texas and the political tensions we all face as writing instructors. The Q & A that followed was lively, and I couldn’t have asked for a better conference. I made a Storify document of the live-tweets from our panel that you can see here. Major kudos as well to Paul Butler of the University of Houston, who chaired the panel (and offered a lot of help organizing behind the scenes).
But as a conference-goer, as opposed to a presenter, the panels that inspired me most were the ones focused on writing assessment. It began with the 8 a.m. panel with Joshua Daniel-Wariya, Madeleine Sorapure, and Susan Delagrange called “What’s on the Screen: Innovative Approaches to Student Screencasting.” Each panelist spoke about their pedagogical work using screencasting to help students reflect on their writing. During the Q & A, someone asked Daniel-Wariya about assessment. For him, the reflective essay that typically follows a multimodal project just didn’t cut it anymore; instead, he found success developing a rubric in conjunction with his students throughout the semester (which reminds me of some work by Chanon Adsanatham on this very topic). Later that day, though, Florida State graduate student Joseph Cirio presented data from a study on developing site-based rubrics with students, and found that student-generated rubrics can actually be problematic: more often than not, students were unable to articulate what they actually valued about writing beyond limited metaphors of exchange (for example, I give you the paper, you give me back a grade, with the rubric functioning as a kind of exchange rate in the process). For Cirio, the rubric itself limits the kind of critical thinking we are trying to cultivate in the first place. Cirio’s results resonate with my own program review measuring learning outcomes after a lecture from a visiting scholar. In my case, the scores from the rubric were lower than we expected, which led us to question the efficacy of rubrics in the first place. But Daniel-Wariya’s rubric was different: his rubric (the assignment was to make a screencast that explained a video game to an audience) was developed over time and through the study of genre.
He had the class watch tons of videos in the genre he was assigning, and through that explicit genre instruction, the class collaboratively designed, over time, a rubric that outlined valuable traits in multimodal pieces. I imagine that in locally controlled writing rubrics, collaboratively designed with students, acute attention to genre might be incredibly important. I noticed that Anne Ruggles Gere has published a little about this, but I’m actively searching for more treatments as I think about how I might design my own writing assessments in the future.
The best panel I saw was on a theory of ethics for writing assessment, in which Bob Broad from Illinois State University gave an eloquent and biting critique of the negative consequences attached to writing assessments, specifically in terms of for-profit testing companies like the Educational Testing Service. The respondent, a lobbyist for the Educational Testing Service, gave a perfunctory response to the critiques, suggesting that testing was not the enemy but rather systemic poverty, and that local assessments are a slippery slope. Unfortunately, he avoided the comments from Broad and Norbert Elliot of the New Jersey Institute of Technology, who suggested that attaching negative consequences to assessments is unethical. With a smiling Les Perelman in the audience, there was something nice about watching experts in the field of writing assessment tell a lobbyist what they really feel, even if nothing productive emerges from the exercise.
These panels speak especially to my own project, in which we designed survey questions against a rubric from the American Association of University Professors to measure learning outcomes after a visiting scholar taught in small groups and lectured to a larger public audience. Like I said above, together with our Office of Institutional Effectiveness, we found the scores from the AAUP rubrics were surprisingly low. Now, this could mean a few things: the outcomes on the rubric were poorly chosen; the students didn’t learn as much as we had anticipated; or there might be other mechanisms by which we should measure learning outcomes programmatically. My hunch is that it is a combination of the three, and our article presents the data and suggests, much like Broad and others, that rubrics may in fact screen out as much as they screen in. In my research for this blog, I found a brand-new book by Ed White, Norbert Elliot, and Irvin Peckham called Very Like a Whale: The Assessment of Writing Programs. There are also books I need to reread, like Bob Broad’s What We Really Value: Beyond Rubrics in Teaching and Assessing Writing, as well as Brian Huot’s (Re)Articulating Writing Assessment for Teaching and Learning. Other books on my reading list as I prepare the lit review: Asao Inoue and Mya Poe’s Race and Writing Assessment as well as Michael Neal’s Writing Assessment and the Revolution in Digital Texts and Technologies.
At any rate, the panels on writing assessment at 4Cs this year have really helped me situate my current project within a literature that can inform an analytical framework for thinking about program assessment. From my project, and from the field of writing assessment, the revelation here isn’t that rubrics aren’t useful, but that rubrics only give us a small piece of a much larger puzzle. I’m excited to work more on my post-dissertation project this spring. Thanks to my writing assessment colleagues across the country for helping me at the conference, and thanks to my writing pedagogy friends for helping make my own presentation memorable and exciting.