
Monthly Archives: November 2013

Digital Writing Assessment and Evaluation

Since this book didn’t come out until after class started, I didn’t get a chance to include it in the syllabus, though it’s clearly the perfect focus for our class. So on this, our last day of content this semester, I want us to spend some time seeing what we missed. Each of us will skim a chapter in class and post a “comment” that responds to the following questions:

1. On what content does this chapter build to frame or situate the ideas?

2. What concepts, works, or authors does the chapter affirm?

3. What concepts, works, or authors does the chapter challenge?

4. Is there anything particularly unique or interesting about the approach in this chapter?

http://ccdigitalpress.org/dwae/

 
14 Comments

Posted on November 26, 2013 in Uncategorized

 

2011 NAEP Writing Results

http://nationsreportcard.gov/writing_2011/

 

Posted on November 19, 2013 in Uncategorized

 

2013 NSSE Results

Here is the 2013 National Survey of Student Engagement report:

http://nsse.iub.edu/NSSE_2013_Results/index.cfm

 

Posted on November 19, 2013 in Uncategorized

 

Common Core Report

Here’s an interesting report on the Common Core Standards:

http://blogs.edweek.org/edweek/curriculum/Standardized%20Testing%20and%20the%20Common%20Core%20Standards_FINAL_PRINT.pdf

 

Posted on November 18, 2013 in Uncategorized

 

Broad, Bob. “More Work for Teacher? Possible Futures of Teaching Writing in the Age of Computerized Assessment.”

Rank

I think Bob Broad would fall around 3. At this point, he’s opposed to AES because it doesn’t assess what we’re trying to teach students, but if at some point AES can assess rhetorical choices (audience, tone, etc.), he would consider using it in his classes.

 

Argument 

Broad says that one of the biggest selling points the makers and marketers of AES push is that it will save the instructor so much time, time that can then be spent on other, more important things (Broad compares this argument to the one used to market vacuums to women in the 20th century). While these devices do make certain tasks easier, the result isn’t free time; it’s time you’re then forced to spend on other things. In the instructor’s case, that means teaching students how to write for the machine, not how to actually write. Broad closes the essay by saying he isn’t totally, 100% opposed to AES: if someone can make a machine that actually assesses rhetorical choices instead of just facts and sentence structure, he would consider using it in the classroom.

 

Assumptions

  • That what we’re trying to teach students about writing (i.e., how to make rhetorical choices effectively) is much more important than things like paragraph length, sentence structure, and being 100% factual. 
  • That ETS only claims to listen to instructors about what they want and need, while actually trying too hard to shape what teachers are doing.
  • That the time freed up by using AES would immediately be redirected into some other non-essential task.

 

Points of Interest

  • I thought the AES : Teachers :: Vacuum/Stove : Women analogy that Broad starts this essay with was a really interesting comparison, not only for the explicit reasons Broad states, but also because it subtly paints AES as something that merely handles an unpleasant task rather than something that actually benefits students and teachers.
  • I also liked the way Broad remained hopeful that one day AES could actually be useful; it just needs to focus on what we’re actually teaching, not just on what ETS is capable of assessing at the moment.
 

Posted on November 14, 2013 in Uncategorized

 

Haswell, Richard. “Drudges, Black Boxes, and Dei Ex Machina.”

Position on AES: 6. Rejects machine scoring for assessment, but not for placement, and suggests that we, as a field, develop our own software.

Haswell admits that we, as a field, are complicit in the popularization of machine scoring, since we’ve positioned writing assessment as “drudgery” (59). He provides a partial history of computer scoring. He argues that machine scoring software represents a “black box” (we don’t always know how it works), but that it mimics some of our own practices as teachers (73). He claims that we should come up with ways not only to critique machine scoring, but to resist it (76). But he concedes that machine scoring might be useful in placement (77).

Assumptions:

  • Assumes that using software for placement is less damaging than using it for assessment.
  • Assumes that we, as a field, are complicit in the creation and popularization of machine scoring software.

Points of interest:

  • Haswell suggests that we (as a field) write our own software (76). Maker culture!
  • He wants administrations to bear the burden of proof for adopting new software (76).
 
1 Comment

Posted on November 14, 2013 in Uncategorized

 

Condon, William. “Large-scale assessment, locally-developed measures, and automated scoring of essays: Fishing for red herrings?” Assessing Writing 18 (2013): 100–108.

Rank: 0 (“Because these tests underrepresent the construct as it is understood by the writing community, such tests should not be used in writing assessment, whether for admissions, placement, formative, or achievement testing” (Condon 100)).

Argument: The real problem is that what AES allows us to do (conveniently assess masses of students using one standard) is “too constraining” and “severely under-represent[s] the construct, writing, yet purport[s] to measure that construct effectively” (101). Basically, a 25-minute test (or one slightly longer or shorter), whether scored by a human or a machine, is not enough to produce a good assessment (101, 103).

Assumptions: Machines can offer superficial feedback on things like grammar and syntax because they can count, but they are of no use until they can understand and assess content (102). Using them pervasively is unwise (102).

Points of interest:

  • We’re in danger of compromising what is a rich construct for the sake of profit. What is that going to do to our students?
  • Condon’s great validity table/chart is on page 104.
  • He suggests that placement should be non-vertical; that is, instead of sorting students into places on a curricular totem pole-like spectrum (with remedial at the bottom, traditional in the middle, and honors at the top), Condon advocates a system where we adhere more to a traditional curriculum, and give remedial students supplemental work and help (106).
 

Posted on November 14, 2013 in Uncategorized