Johnson and VanBraken focus on linguistic error in print-based writing, and they found a bias among raters: raters were more lenient toward ESL writers' errors but more critical of AAE features. However, Jacob and Joe are curious whether these findings reflect similar biases in digital and visual texts. So we are redesigning the study: we will observe potential differences and tensions in visual literacy across raters and students when the students are black, white, or international.

Hypotheses: Because international students already have a well-established visual literacy, but one based on a different spatial logic, these students will be rated lower than both white and black students, for the same reason that AAE writers were rated lower in Johnson and VanBraken's study.

Methods: Instead of a written text, raters will assess a monument remix assignment, in which students visually remix monuments. We will gather instructors with experience evaluating visual texts and compare their evaluations.
Photoshop: The Great Equalizer. (Joe Cirio and Jacob Craig)