
Digital Writing Assessment and Evaluation

Since this book didn’t come out until after class started, I didn’t get a chance to include it in the syllabus, though it’s clearly the perfect focus for our class. So on this, our last day of content this semester, I want us to spend some time seeing what we missed. Each of us will skim a chapter in class and post a “comment” that responds to the following questions:

1. On what content does this chapter build to frame or situate the ideas?

2. What concepts, works, or authors does the chapter affirm?

3. What concepts, works, or authors does the chapter challenge?

4. Is there anything particularly unique or interesting about the approach in this chapter?

http://ccdigitalpress.org/dwae/

 

Posted by on November 26, 2013 in Uncategorized

 

2011 NAEP Writing Results

http://nationsreportcard.gov/writing_2011/

 

Posted by on November 19, 2013 in Uncategorized

 

2013 NSSE Results

Here is the 2013 National Survey of Student Engagement report:

http://nsse.iub.edu/NSSE_2013_Results/index.cfm

 

Posted by on November 19, 2013 in Uncategorized

 

Common Core Report

Here’s an interesting report on the Common Core Standards:

http://blogs.edweek.org/edweek/curriculum/Standardized%20Testing%20and%20the%20Common%20Core%20Standards_FINAL_PRINT.pdf

 

Posted by on November 18, 2013 in Uncategorized

 

Broad, Bob: “More Work For Teacher?: Possible Futures of Teaching Writing in the Age of Computerized Assessment”

Rank

I think Bob Broad would fall around 3. At this point, he’s opposed to AES because it doesn’t assess what we’re trying to teach students, but if AES could someday assess rhetorical choices (audience, tone, etc.), he would consider using it in his classes.

 

Argument 

Broad says that one of the biggest selling points the makers and marketers of AES tout is that it will save instructors so much time, which they can then spend on other, more important things (Broad compares this argument to the one made by those who marketed vacuums to women in the 20th century). While such devices do make tasks easier, the result isn’t free time; it’s time you’re then forced to spend on other things. In the instructor’s case, that means teaching students how to write for the machine, not how to actually write. Broad closes the essay by saying he isn’t totally opposed to AES: if a machine could actually assess rhetorical choices instead of just facts and sentence structure, he would consider using it in the classroom.

 

Assumptions

  • That what we’re trying to teach students about writing (i.e., how to make rhetorical choices effectively) is much more important than things like paragraph length, sentence structure, and being 100% factual. 
  • That ETS only claims to listen to instructors about what they want/need, and instead tries too hard to shape what teachers are doing.
  • That the time freed up by using AES would immediately be redirected into some other non-essential task.

 

Points of Interest

  • I thought the AES : Teachers :: Vacuum/Stove : Women analogy that Broad opens the essay with was a really interesting comparison, not only for the explicit reasons Broad states, but also because it subtly paints AES as something that merely handles an unpleasant task rather than actually benefiting students and teachers.
  • I also liked the way Broad remains hopeful that one day AES could actually be useful; it just needs to focus on what we’re actually teaching, not just on what ETS is capable of assessing at the moment.
 

Posted by on November 14, 2013 in Uncategorized

 

Haswell, Richard. “Drudges, Black Boxes, and Dei Ex Machina.”

Position on AES: 6. Rejects machine scoring for assessment, but not for placement, and suggests that we, as a field, develop our own software.

Haswell admits that we, as a field, are complicit in the popularization of machine scoring, since we’ve positioned writing assessment as “drudgery” (59). He provides a partial history of computer scoring. He argues that machine scoring software represents a “black box” (we don’t always know how it works), but that it mimics some of our own practices as teachers (73). He claims that we should come up with ways not only to critique machine scoring, but to resist it (76). But he concedes that machine scoring might be useful in placement (77).

Assumptions:

  • Assumes that using software for placement is less damaging than using it for assessment.
  • Assumes that we, as a field, are complicit in the creation and popularization of machine scoring software.

Points of interest:

  • Haswell suggests that we (as a field) write our own software (76). Maker culture!
  • He wants administrations to bear the burden of proof for adopting new software (76).
 

Posted by on November 14, 2013 in Uncategorized

 

“Large-scale assessment, locally-developed measures, and automated scoring of essays: Fishing for red herrings?” by William Condon (from Assessing Writing 18 (2013); pages 100–108)

Rank: 0 (“Because these tests underrepresent the construct as it is understood by the writing community, such tests should not be used in writing assessment, whether for admissions, placement, formative, or achievement testing” (Condon 100)).

Argument: The real problem is that what AES allows us to do (conveniently assess masses of students using one standard) is “too constraining” and “severely under-represent[s] the construct, writing, yet purport[s] to measure that construct effectively” (101). Basically, a roughly 25-minute assessment (scored by either human or machine) is not long enough to assess writing well (101 and 103).

Assumptions: Machines can offer superficial feedback on things like grammar and syntax because they can count, but they are of no use until they can understand and assess content (102). Using them pervasively is unwise (102).

Points of interest:

  • We’re in danger of compromising what is a rich construct for the sake of profit. What is that going to do to our students?
  • Condon’s great validity table/chart is on page 104.
  • He suggests that placement should be non-vertical; that is, instead of sorting students along a curricular hierarchy (remedial at the bottom, traditional in the middle, honors at the top), Condon advocates a system where we adhere more closely to a traditional curriculum and give remedial students supplemental work and help (106).
 

Posted by on November 14, 2013 in Uncategorized

 

“Writeplacer Plus in Place: An Exploratory Case Study” by Anne Herrington and Charles Moran (from Machine Scoring of Student Essays: Truth and Consequences, pages 114-127)

Rank: 2-3 (Herrington and Moran are suspicious enough to be opposed to the decision to implement Writeplacer, but they do see its upsides (cost/efficiency)).

Argument: The researchers are convinced of Writeplacer’s cost-effectiveness as a placement exam scorer (or, at least, they see how it is a cheaper and more efficient method than hand-scoring), but they are troubled by the removal of human assessors for multiple reasons.

Assumptions: This is an interview-based case study of the implementation of Writeplacer Plus (as an entrance-level sorting device for newly registered students) at Valley College. The researchers interviewed administrators, teachers, and students about the use of Writeplacer. Writeplacer, in this situation, was used as a one-time test (meant to act as a filter), and so the administrators see no problem using it that way (since it’s not functioning within the classroom itself).

Points of Interest: 

  • The core (and fascinating) problem that the researchers isolate is that the students are writing not to humans, but to computers (a problem that extends beyond this study to the greater realm of AES) (114). They wonder how knowledge of this different audience could affect students’ rhetorical practice.
  • Administrators and faculty have distinctly opposed opinions about Writeplacer: the former think of it as a great “filter” (124); the latter are suspicious and don’t want to be blindsided (when they were hand-scoring exams, they could get an idea of what kind of writing they would have to work with). Students were more likely to shift between those positive and negative poles of opinion. Overall, their reactions were mixed: some approved, some disapproved, but all wanted actual human teachers in their courses.
  • No matter what, everyone involved (essentially) was in favor of keeping AES out of the actual classroom (even though Writeplacer Plus was justified by administrators as a one-time “filter”).
 

Posted by on November 14, 2013 in Uncategorized

 

Ramineni, Chaitanya. “Validating automated essay scoring for online writing placement”

Rating: 5. This is a quantitative study about AES as a way to score placement tests, so this article feels like a more objective (or maybe just a less *explicitly* opinionated) piece of writing.

Argument: Based on a quantitative study with 879 participants writing under timed conditions, Ramineni found that students perform better with prompts tailored to specific universities, that the AES system, Criterion, provided an assessment that better distinguished writing ability from general academic ability (as measured by GPA), and that testing conditions (proctored vs. non-proctored) had a statistically insignificant effect.

Assumptions:

  • AES is direct writing assessment
  • all digital platforms are the same; in other words, the Criterion platform mimics any other platform in which students can submit writing samples, reflecting the contemporary communication environment (43)
  • AES is a learning-support tool, and that is always the case
  • scores generated from admissions tests are not suitable for making placement decisions

Points of interest

  • She makes some important calls for further research based on what is missing in her study, particularly issues with the sample.
  • Her methods section is really thorough and a good example of method sections generally
 

Posted by on November 13, 2013 in Uncategorized

 

Deane, Brent and Townsend

For this week, I read Paul Deane’s article “On the relation between automated essay scoring and modern views of the writing construct” as well as Edward Brent and Martha Townsend’s chapter “Automated Essay Grading in the Sociology Classroom: Finding Common Ground.” On the whole, these two pieces leave me with a mixed view of AES. I think it can be useful, but within a narrow set of circumstances, and never as a means to completely replace a human reader. My analysis below treats the particular implications of each piece.
Deane

Rank: 7

Arguments

Objections to AES are based on objections to the general construct of writing employed in standardized testing. Objections to AES, then, speak less to any particular feature of the technology than to the construct of writing used in assessing student writing.

Assumption

The writing construct assumed by AES is a valid construct.

Points of Interest

This article raises interesting questions about the kind of writing construct we need to assume in order to oppose or defend AES.

Brent and Townsend

Rank: 5

Arguments

AES can be helpful, but only in limited situations and never as a replacement for a human reader.

Assumptions

AES challenges the value of writing as communication and as a contextualized act.

Points of Interest

Brent, a sociology instructor, was able to use AES with some success in a large, 200-person introductory sociology class, where the focus is on key concepts and terms. But he explicitly says he would not use AES in a more research-based class.

 

Posted by on November 13, 2013 in Uncategorized