Photoshop: The Great Equalizer. (Joe Cirio and Jacob Craig)

03 Oct

Johnson and VanBrackle focus on linguistic error in print-based writing; they found a bias among raters: raters were more lenient toward ESL writers' errors but more critical of AAE features. Jacob and Joe are curious whether these findings reflect similar biases in digital and visual texts, so we're redesigning the study: we will observe the potential differences and tensions in visual literacy across raters and students when the students are black, white, or international.

Hypothesis: Because international students already have a well-established visual literacy, though one based on a different spatial logic, these students will be rated lower than both white and black students, for the same reason that AAE writers were rated lower in Johnson and VanBrackle's study.

Methods: Instead of a written text, raters will assess a monument remix assignment, in which students visually remix monuments. We will gather instructors with experience evaluating visual texts and compare their evaluations.


Posted by on October 3, 2013 in Uncategorized

 

4 responses to “Photoshop: The Great Equalizer. (Joe Cirio and Jacob Craig)”

  1. brucebowlesjr

    October 8, 2013 at 6:35 pm

    This seems like a solid assessment design.

    How might you define the differences between use of visuals for ESL, African American, and Caucasian students?

    You might want to explore if any research suggests whether or not African American and Caucasian students use visuals in a different manner.

     
  2. amypiotrowski

    October 9, 2013 at 5:01 pm

    Interesting question with really important implications for teachers. How are you going to define experience when selecting instructors who have experience evaluating visual texts – will they have taught for a certain number of years, or will they have taught certain courses in visual literacy? You may want to think about how raters might score the monument remix.

     
  3. jasonecuster

    October 10, 2013 at 12:50 pm

    Comment: I’m wondering how you’d select instructors for this evaluation, since I feel like depending on the instructor or kind of instructor, you’ll get a wide range of responses back from this kind of assessment. Someone coming from FSU may hold a completely different set of values in assessing a visual than someone from TCC or FAMU, so knowing/selecting based on experience evaluating visual texts might be really important for this kind of assessment.

    Question: How will instructors be advised to evaluate the students’ visuals? I assume you’ll need a pretty solid rubric of some kind, and something that mirrors the pieces we looked at in the past weeks to make direct comparisons, and of course, this means picking out what criteria you think are important and thereby leave other criteria out.

    Suggestion: I really do think the core of this particular research question is worth exploring and it makes good sense. I’d recommend thinking about the things I’ve mentioned above and thinking about pursuing the idea at some point, given how crucial understanding and evaluating visual/digital texts becomes every day in our field.

     
  4. sarahm1320

    October 11, 2013 at 7:25 pm

    I like the idea of translating concerns about linguistic bias into an investigation of possible visual/spatial biases. However, I am concerned that you are dividing the participants into two racial/ethnic groups and one international group. J & B were able to do this in their study because their categories actually represented three different kinds of “English”; they weren’t necessarily based on the race or nationality of the participants, but rather on perceived race or nationality (since some African Americans don’t write in AAVE, and some international students don’t make ESL errors).

    I think that before completing this study it would be advisable to identify some of the spatial markers of the categories of participants you are studying in order to justify breaking the participants into those particular groups OR you could turn it into a more investigative, descriptive study that is meant to determine if there actually are distinctive types of spatial arrangement (and what those distinctive types look like) based on race or nationality, and if so, where those lines could be drawn (i.e., do people who are native speakers of the same language share similar spatial orientations, or is it based on shared nationality, shared culture, etc.).

    This would be a very tricky study to do, but it could yield some very important results, particularly as the field of composition seems to be moving in a more visual, multi-modal direction.

     
