
Author Archives: DB

About DB

PhD student in English at a large university in the American South. Married. Late 30s. One toddler daughter, one teenage biological son. Amateur musician and cook. Trying not to drink.

Haswell, Richard. “Drudges, Black Boxes, and Dei Ex Machina.”

Position on AES: 6. Rejects machine scoring for assessment, but not for placement, and suggests that the field develop its own software.

Haswell admits that we, as a field, are complicit in the popularization of machine scoring, since we’ve positioned writing assessment as “drudgery” (59). He provides a partial history of computer scoring. He argues that machine scoring software represents a “black box” (we don’t always know how it works), but that it mimics some of our own practices as teachers (73). He claims that we should come up with ways not only to critique machine scoring, but to resist it (76). But he concedes that machine scoring might be useful in placement (77).

Assumptions:

  • Assumes that using software for placement is less damaging than using it for assessment.
  • Assumes that we, as a field, are complicit in the creation and popularization of machine scoring software.

Points of interest:

  • Haswell suggests that we (as a field) write our own software (76). Maker culture! (See the sketch after this list.)
  • He wants administrations to bear the burden of proof for adopting new software (76).
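
Out of curiosity about that suggestion: here is a minimal sketch of what a transparent, field-built scorer might look like. Everything in it (feature choices, weights, names) is my own invention for illustration; the point is only that every feature and weight is open to inspection, unlike the "black box" Haswell describes.

    # Hypothetical, deliberately transparent toy scorer -- not any actual
    # AES product, and not Haswell's design. Every feature is auditable.
    import re

    def surface_features(essay):
        """Extract the kind of surface features AES tools typically lean on."""
        words = re.findall(r"[A-Za-z']+", essay)
        sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
        return {
            "word_count": len(words),
            "avg_sentence_length": len(words) / max(len(sentences), 1),
            "unique_word_ratio": len({w.lower() for w in words}) / max(len(words), 1),
        }

    def score(essay):
        """Combine features with hand-set weights -- no hidden model."""
        f = surface_features(essay)
        # Arbitrary placeholder weights; a field-built tool could publish
        # and debate these openly instead of hiding them.
        return (0.01 * f["word_count"]
                + 0.5 * f["avg_sentence_length"]
                + 2.0 * f["unique_word_ratio"])

    print(round(score("Writing assessment is drudgery. Or is it?"), 2))

The appeal of "maker culture" here is exactly this legibility: a committee of writing teachers could argue over every line.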
 

Posted by DB on November 14, 2013 in Uncategorized

 

Condon, William. “Large Scale Assessment, Locally Developed Measures…”

Position on AES: 1. Completely rejects AES/automated scoring in "admissions, placement, formation, or achievement testing" (par. 1).

Condon critiques AES/automated scoring on the grounds that these programs cannot do justice to the "writing construct," focusing instead on surface-level features. Additionally, the writing samples required by most programs are not realistic examples of student writing, and even the best software mimics the abilities of a human rater only poorly, in statistical terms. Condon places machine scoring at the lowest point of the progression "ranking, assessing, evaluating."

Assumptions:

  • He assumes that the use or rejection of machine scoring is a zero-sum game: there should be no compromises, and the software is not really useful in any context.
  • He assumes that the use of machine scoring cannot support good teaching of writing.

He provides some interesting information comparing machine scoring software to human raters (par. 9). He also supplies a helpful chart for understanding the levels of assessment effectiveness and granularity (Fig. 1).
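
For readers who want a handle on what those machine-versus-human comparisons measure: agreement is usually reported with a chance-corrected statistic such as Cohen's kappa (the AES industry tends to prefer a weighted variant). A rough sketch with invented scores; Condon's actual figures are in his par. 9.

    # Toy illustration of machine-vs-human rater agreement. All scores
    # below are invented for demonstration, not drawn from Condon.
    from collections import Counter

    def cohens_kappa(rater_a, rater_b, labels):
        """Agreement between two raters, corrected for chance agreement."""
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        ca, cb = Counter(rater_a), Counter(rater_b)
        expected = sum((ca[l] / n) * (cb[l] / n) for l in labels)
        return (observed - expected) / (1 - expected)

    human   = [4, 3, 5, 2, 4, 3, 4, 5]   # hypothetical 6-point holistic scores
    machine = [4, 4, 4, 2, 4, 3, 3, 4]
    print(round(cohens_kappa(human, machine, labels=range(1, 7)), 2))  # ~0.27

The catch Condon implies is that a scorer can post respectable agreement numbers while still attending only to surface features.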

 

Posted by DB on November 12, 2013 in Uncategorized

 

21st Century Literacies for Rhet/Comp Grad Students

What? | When? | Where?
Work in a word processor (Word, Pages, etc.) | Usually expected prior to matriculation, or at least very early in the program | No formal training provided; learn from classmates, etc.
Share and collaborate online using tools (blogs, Google Docs, Doodle) | Very early in the program | No formal training; learn from classmates, etc.
Create presentations using software like Prezi or PowerPoint | Very early in the program | No formal training; learn from classmates or occasional training in the digital studio
Demonstrate a baseline ability to manipulate images | By the visual rhetoric course, at the latest | No formal training; learn from classmates or occasional training in the digital studio
Visually arrange digital documents like e-portfolios | | We learn arrangement as a concept in rhetoric, convergence culture, etc.
Create a user-friendly interface in online environments (portfolios, blogs, etc.) | | We learn about interface, navigation, etc. in visual rhetoric, rhetoric, and an optional portfolio class
Understand the logics of various media, and how they work together in convergence | | Learn on our own or from colleagues; also covered in digital revolution and visual rhetoric
Use a baseline understanding of interfaces to quickly learn various software platforms | | Learn on our own or from colleagues; occasional training in the digital studio

One trend we see in this table is that procedure is not necessarily taught through coursework; it is expected beforehand or learned in extracurricular contexts, such as in the digital studio, with colleagues, or on our own. The emphasis falls on the theories behind the literacies: instead of learning how to use a platform, we learn the concepts behind its use.

David and Joe

 

Posted by DB on November 5, 2013 in Uncategorized

 

Portfolios as a Substitute for Proficiency Examinations (Elbow, Belanoff)

Portfolio evaluation, the authors argue in this short essay, is superior to proficiency examinations based on a single sample, because portfolios honor pedagogy and allow the process model to work. Elbow and Belanoff argue that a good measure of student success in a writing course is one that allows multiple, self-selected samples, along with formative feedback from the teacher. Additionally, they propose a model where portfolios are evaluated locally by the student's teacher along with another teacher who does not know the student, with the two educators deciding whether the portfolio achieves the "C" (pass) rating or not. If a portfolio fails because of one paper, the student is given the chance to revise it; if it fails as a whole, the student has to retake the writing course.
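
The decision flow itself is simple enough to state in a few lines. A minimal sketch, with labels of my own choosing (Elbow and Belanoff obviously do not put it this way):

    def portfolio_outcome(both_readers_pass, failing_papers):
        """Pass/fail flow as I read Elbow and Belanoff's proposal."""
        if both_readers_pass:
            return "pass"           # portfolio earns the "C" rating
        if len(failing_papers) == 1:
            return "revise"         # one weak paper: the student revises it
        return "retake course"      # fails as a whole: retake the writing course

    print(portfolio_outcome(False, ["research essay"]))  # -> revise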

I do see a strength in this method, along with the ones the authors identify: it gives the student's work an outside audience. Often, as Elbow and Belanoff note, teachers become invested in student writing, especially if a process model is followed. We've helped shape these papers, so we are, to some degree, implicated in the result. Introducing another rater, however, gives us a way to look at the product as well as the process.

 

Posted by DB on October 29, 2013 in Uncategorized

 

Eports: Making the Passage from Academics to Workplace (D’Angelo, Maid)

This chapter explores the tension, in a technical communication graduate program, between academic expectations and "market" (practitioner) expectations when it comes to student outcomes. The focus is especially on technological proficiency (on tools) when it comes to graduation portfolios. The authors surveyed students and practitioners and found that students wanted more training in technological tools, while there was some evidence that practitioners, especially managers, placed higher value on higher-order skills. The authors themselves contend that tools are subordinate to theory, but acknowledge that the tension exists and will probably continue to exist.

I well remember this tension as a professional communication graduate student at Clemson. At that time, I did not know Adobe CS at all, and I was in a visual rhetoric class with several students who had undergrad degrees in technical communication and knew the software fairly well. As you know if you've used Photoshop or Illustrator (or InDesign or Dreamweaver), the learning curve can be fairly steep. I felt like I had been thrown into deep water, and I was puzzled by my teachers' resistance to teaching the software that they clearly expected us to use.

From an assessment perspective, this dovetails with my research interest in multimodality–what exactly are we assessing when we assess portfolios like this? The content? Familiarity with the tools? Some combination? And how, exactly, do we tell them apart, if that is even desirable?

 

Posted by DB on October 29, 2013 in Uncategorized

 

Testing 11-Year-Olds

Came across this, and thought it relevant: a teacher's account of having to give an indirect assessment of more than 100 questions to 11-year-olds.

 

Posted by DB on October 8, 2013 in Uncategorized

 

Assessing Industry-Standard Software? Or Multimodal Assignments?

Are students without a background in industry-standard design software rated more harshly in digital multimodal compositions?

I have wondered for a while whether digital multimodal composition assignments, from an assessment/response perspective, favor students who have a background in industry-standard design software like Adobe Photoshop or Adobe Illustrator. To measure this, I could modify the study Johnson and VanBrackle conducted in "Linguistic Discrimination in Writing Assessment." Into a corpus of multimodal projects to be assessed, I could insert projects with typical design "errors" that result from ignorance of the software: pixelated images from improper resizing, etc. Then, using two groups of raters, one knowledgeable about design and one not, I could compare the texts with "mistakes" against texts without "mistakes." I could even separate the sample texts into three classes: expertly designed, designed with some software knowledge, and designed with no software knowledge.
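
To make the design concrete, here is a sketch of how the resulting scores might be tabulated. Every number and label below is an invented placeholder; what matters is the 3 x 2 layout (three design-expertise classes of sample texts crossed with two rater groups).

    # Hypothetical holistic scores (1-6) per condition:
    # (design quality of sample text) x (rater's design knowledge).
    from statistics import mean

    scores = {
        ("expert design",  "design-savvy rater"): [5, 6, 5, 5],
        ("expert design",  "naive rater"):        [5, 5, 6, 4],
        ("some knowledge", "design-savvy rater"): [4, 4, 5, 3],
        ("some knowledge", "naive rater"):        [5, 4, 4, 4],
        ("no knowledge",   "design-savvy rater"): [2, 3, 2, 3],
        ("no knowledge",   "naive rater"):        [4, 3, 4, 3],
    }

    # If design-savvy raters penalize software-driven "errors" more harshly,
    # the gap between rater groups should widen as design quality drops.
    for (design, rater), vals in scores.items():
        print(f"{design:15s} {rater:20s} mean = {mean(vals):.2f}")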

 

Posted by DB on October 3, 2013 in Uncategorized