Lately I’ve been involved in a really neat collaboration with the Vice Provost of Institutional Effectiveness, the Interim Dean of the Honors College, and a fellow graduate student in the English department. This summer we’ve received a bit of grant money to tease through student surveys conducted in 7 undergraduate classrooms that interacted in some way with a visiting scholar on campus. This particular scholar was Kenyan novelist and activist Ngũgĩ wa Thiong’o. This guy is incredible. His writing in Kenya about language, power, and colonialism led to his arrest and imprisonment in the late 70s, and he’s been writing poems, plays, and novels in political exile in the US and UK ever since. He was also on the short list several years ago for the Pulitzer. At our university, he gave a large public reading of his latest memoir, and he also visited several classrooms that had read his work in advance to discuss some of its themes and talk about his life. What a great opportunity for these students. Ngugi’s visit is tied to our institution’s Quality Enhancement Plan, an effort to foster what they’re calling “global learning” through several initiatives, one of which is bringing in guest scholars.
The question is, how do we assess whether or not students learned anything from his visit? That’s where we come in. We had instructors give their students a written survey about their experiences with Ngugi–attending the public reading, the classroom workshop, or both–and asked them to evaluate their learning on “issues of global importance.” As a researcher, I’m quite skeptical of “global citizenship” as a concept (Amy Wan’s “In the Name of Citizenship,” in addition to her newest book Producing Good Citizens, explains it more eloquently than I can), but I am still curious to know what kinds of things students learn through direct engagement with the scholars whose work they’re reading. At any rate, we teased through the written responses from these courses and assigned each a score according to the QEP rubric, which assesses students’ ability to “discuss critical questions about the impact of global issues on domestic and global communities.”
Our preliminary results, at face value, seem predictable. Classes that engaged with Ngugi directly in the classroom scored higher on the “global learning” scale than students who merely read his work in advance of the public lecture. But the really interesting discussion happened when we asked: “What kinds of learning is the rubric deflecting as much as it is reflecting?” That question opens onto a host of others, taken up generally in the growing field called the “Scholarship of Teaching and Learning.” Now, my colleague and I are going back through the responses, coding for evidence of learning not reflected in the QEP rubric. So far, I’m noticing some encouraging trends: many students exhibited a kind of embodied response to Ngugi, noting that it wasn’t until they saw him that they could “feel” what he was really getting at. Other students remarked on how communicating with him in person, seeing his reactions to questions, allowed them to understand his work. Some of these students scored high on the QEP rubric, but many who exhibited these affective traits scored low. What is the connection between affect and learning? Affect is a growing topic of interest in rhetoric and composition circles, and I think its relationship to assessing writing via program evaluation is an interesting side of this multifaceted discussion.
The rest of the summer we’ll be reading, coding, writing, learning, and collaborating across departments and disciplines on this project. I like it because it engages multiple sites at the intersections of program assessment, writing assessment, public humanities, and global citizenship–areas of which I am deeply critical and in which I am deeply invested.