After defending my dissertation, graduating from TCU, packing my belongings, moving back home to Oklahoma for a couple of months, and then driving 21 hours across the country with my wife, mother-in-law, and elderly cat in tow (the mother-in-law has now flown back to Oklahoma, tissues in hand), I am finally settled in beautiful Santa Barbara and ready to get back to work.
Despite the craziness of moving and driving, I did manage to accomplish several things this summer.
- I submitted final revisions for the chapter on community engagement and service learning in the humanities for Cambridge University Press. We are hoping the Handbook for Community Engagement and Service Learning, the first of its kind, comes out in 2016. It should be a vital contribution to engagement scholarship across disciplines.
- The team conducting the program assessment finally wrote up our report and submitted it to a top journal in our field. It’s currently under review, and I have high hopes for its placement.
- Everyone on the CCCC panel on Linda Brodkey revised their presentations, and we combined them into a single essay, thanks in large part to the editorial acumen of David Bleich. We have submitted it to another top journal in our field. Again, high hopes there, and I’m extremely happy we were able to get everything organized and submitted.
- I responded to a CFP on MOOCs, where I hope to turn my MOOC chapter into an argument about problematic partnerships in the field of writing studies.
- I couldn’t help but submit to the CFP from Cheryl Ball and Drew Loewe on some “Bad Ideas about Writing.”
Other items on the agenda this year: publish my Common Core material from the dissertation, and work on a larger framework for a book project. Yeah, the B-word. My goal is for all the writing and teaching in store over the next few months to serve as an incubator for a broader understanding of these “problematic partnerships,” which, as I’m starting to see (and will eventually argue), are a huge part of the work we do and have been doing in the field for some time.
But I don’t want to get too wrapped up in research. I have also been assigned two courses this fall at UCSB: Business Writing and Writing II. My new director, while chatting in my office the other day, casually suggested that there’s this really neat website where you can make infographics, and five hours later I emerged with two funky syllabi that I plan to introduce on the first day of class.
Below are images of my infographic syllabi for Writing II and Business Writing. I’ve provided links to web versions here and here, and I’ll also make both PDF and Microsoft Word versions available, as the folks at WebAIM recommend in their “Principles for Accessible Design.” Much of the inspiration for these syllabi came from Dr. Julie Platt’s infographic syllabus for a Technical Writing course at the University of Arkansas at Monticello. Hers is much better, obviously, but I tried!

The main thing is that, in a general sense, I want to reimagine the tired old genre of the syllabus. It’s boring, and it frequently serves mostly to outline the “mundane, bureaucratic requirements of the University,” as Adam Heidebrink-Bruno writes in a Hybrid Pedagogy article. In an attempt to bridge the gap between teacher and student, and to make a document that more students might actually read, I turned to the infographic genre as a place to show both my understanding of and compassion for students’ needs and realities. I also think it might be a good entry into genre discussions: What are the genres and conventions of an infographic? How do genres determine what is valued (and, in turn, not valued) in our writing? Also, it was a lot of fun.
Two things to note after scanning the syllabi. First, I used the same template for both. I should’ve been a bit more creative, but hey, this is my first time experimenting here. Second, note my word on writing assessment. As I’ve mentioned elsewhere, writing assessment is something I think about and stress over a lot, and I experiment with different forms of it every semester. Linda Adler-Kassner once wrote in one of her syllabi that grades, “while evil, are a necessary form of feedback.” I agree, to a certain extent, but I am also interested in exploring ways to make grades less evil. So this quarter I’m going to attempt to collaborate on rubrics with the students in my class.

I’m borrowing heavily here from Bob Broad, who introduced something called “Dynamic Criteria Mapping” (DCM) in his book What We Really Value: Beyond Rubrics in Teaching and Assessing Writing. Basically, DCM is a tool for developing collaborative, dynamic, and empirical criteria for student writing together as a class. The process goes something like this: we read samples of student papers with my feedback, we talk openly about that feedback, and we develop, as a group, some “honest, rich, and specific criteria for evaluation.” The UMass Amherst Writing Program has published several resources about DCM as an inductive and collaborative way to develop assessment criteria. One of the benefits, as Broad suggests, in addition to establishing clear, empirical, and transparent grading criteria, is that it strengthens the links between what we tell students and the public about evaluation and what we really do (122).
And, as we know, even though teachers generally have really thoughtful ways of evaluating student writing, students are often still “bewildered by our evaluation practices; even when they accept our commitment to process and follow conscientiously our instructions to revise, it’s clear that they’re not always sure what all that process is for, where it’s supposed to lead, [and] why they should keep revising besides the fact that we’ve asked them to” (Fleming et al.). With DCM, I’m hoping to mitigate that a bit by working together with my students to create a map of what we value in writing, and then evaluating each assignment according to that map.

Broad, in addition to discussing several in-depth cases in his book (the “contextual criteria” maps will be most important to me, I think), has also published in-depth instructions for DCM. The first step is to collect data: I’ll give feedback on a range of writing assignments, and then as a class we’ll go through that feedback to prepare a list of the qualities, features, and aspects of their writing that seem valuable. Next, we’ll analyze the list and create some kind of visual representation of those qualities as clusters of related values. Then we’ll publish that map, and I’ll work it into my process of evaluating their writing. I’ve never done this before, so I’ll report back with any challenges and pitfalls, but I’m excited to incorporate new and innovative ways to mitigate the so-called “evil” of assessing writing.
I welcome feedback on either the design or content of these syllabi and your thoughts on DCM as a way to evaluate classroom writing.