
Digital Writing Assessment and Evaluation

26 Nov

Since this book didn’t come out until after class started, I didn’t get a chance to include it in the syllabus, though it’s clearly the perfect focus for our class. So on this, our last day of content this semester, I want us to spend some time seeing what we missed. Each of us will skim a chapter in class and post a “comment” that responds to the following questions:

1. On what content does this chapter build to frame or situate the ideas?

2. What concepts, works, or authors does the chapter affirm?

3. What concepts, works, or authors does the chapter challenge?

4. Is there anything particularly unique or interesting about the approach in this chapter?

http://ccdigitalpress.org/dwae/


 

14 responses to “Digital Writing Assessment and Evaluation”

  1. DB

    November 26, 2013 at 4:38 pm

    Ch. 9: Toward a Rhetorically Sensitive Assessment Model for New Media Composition (Crystal VanKooten)

    1. On what content does this chapter build to frame or situate the ideas?
VanKooten uses Neal (2011) and Huot (2009) to frame the piece; she also draws on Penrod (2005) and Whithaus (2005).

    2. What concepts, works, or authors does the chapter affirm?
    Local assessment, self-assessment, reflection, re-articulating course objectives.

    3. What concepts, works, or authors does the chapter challenge?
Criticizes Allison (2009) for the term “perfection,” which implies that student work should move toward flawlessness. She also suggests that Neal extend his concepts of organization, arrangement, and clarity to account for shifting meanings in new media.

    4. Is there anything particularly unique or interesting about the approach in this chapter?
    I like the way she suggests students set process goals that include learning certain platforms or software. I feel like the features and logic parts of the model might try to encompass too much. How much of that can she productively teach?

     
  2. brucebowlesjr

    November 26, 2013 at 4:41 pm

Angela Crow: “Managing Datacloud Decisions and ‘Big Data’: Understanding Privacy Choices in Terms of Surveillant Assemblages”

    1. On what content does this chapter build to frame or situate the ideas?

This chapter explores the use of “big data” in relation to issues of student privacy. Essentially, it poses questions about the ethical issues that “big data” raises for privacy.

    2. What concepts, works, or authors does the chapter affirm?

This chapter starts out with an epigraph from the work of our very own Dr. Neal. Crow also weaves herself into conversation with other assessment scholars such as Huot, O’Neill, Broad, etc. Her chapter also draws upon the work of Nissenbaum, contending that privacy is a contextual issue and that ethical implications in this regard must be considered in context. She seems to be advocating for local assessment and valuing the student and classroom rather than the accumulation of “big data.”

    3. What concepts, works, or authors does the chapter challenge?

    Crow seems to be challenging the use of “big data” systems, specifically those of corporations that use data-mining for profit purposes and to customize their applications. Essentially, Crow sees these corporate offerings as dangerous, especially when students are forced to use them at the institutional level.

    4. Is there anything particularly unique or interesting about the approach in this chapter?

    Two aspects of this chapter really jumped out to me. For starters, I was intrigued by her “privacy as contextual” approach. This was a rather unique way of looking at a debate that is frequently framed in absolutes.

    I also thought her suggestion for Rhetoric and Composition to develop its own “big data” software was quite intriguing, even though logistically this seems quite problematic.

     
  3. andrewdavidburgess

    November 26, 2013 at 4:49 pm

    Chapter 4: “Rewarding Risk” (Reilly and Atkins)

    1. On what content does this chapter build to frame or situate the ideas?
This chapter builds on existing work on multimodal assessment but adds a level of “risk” and risk-taking for students and “aspirational” assessment for instructors. The authors use “deliberate practice” as a model for creating open-ended assignment criteria that encourage individual and collaborative exploration. This model has as its goals encouraging exploration and aligning assessment practices with pedagogical practices.

    2. What concepts, works, or authors does the chapter affirm?
    • Brian Huot: using the term “assessment” to describe a process that “seeks to blur boundaries between formative and summative feedback and between instruction and evaluation.”
    • Bob Broad: assertion that multimodal assessment practices should be created for specific projects, thus making them “localized and contextual” and, again, “aspirational.”
    • Michael Neal: Maximizing the usefulness of our assessment practices means aligning them to our values as teachers. In Neal’s words, we should “assess our assessments by continuing our rich dialogue about the purposes of composition and what students should be able to know, think, and accomplish after taking our classes.”
    • Also Neal: discussing the ways in which technologies become familiar, allowing them to grow more transparent. We must embrace the current “kairotic” moment in our assessment practices.
    • Ericsson: The authors build on Ericsson’s ideas of deliberate practice, which involves giving immediate feedback on performance while guiding students through scaffolded assignments and challenging them to grow and develop new proficiencies.

    3. What concepts, works, or authors does the chapter challenge?
• Lee Odell and Susan Katz: assertion that multimodal assessment should be “generalizable and generative.” To this, the authors add that multimodal assessment should also be “aspirational, prompting students to move past the skills they have already learned to bravely take on unfamiliar tasks and work with new tools and applications that may cause them to re-vision their composing practices.” So, they’re not so much challenging Odell and Katz as adding to them; still, Odell and Katz’s idea of generalizability seems to push back pretty hard against the authors’ assertion that assessments should be designed for individual assignments.
• Ed White: he’s not mentioned by name, but they do push back against his concept of grading multimodal projects only in terms of the reflective memo.
    • Michael Neal: Again, not necessarily a challenge, but an addition to Neal’s four criteria for responding to “hypermedia,” which they argue are useful and productive, but do not address “how to encourage risk-taking and experimentation in conjunction with or through assessment processes.”

    4. Is there anything particularly unique or interesting about the approach in this chapter?
The authors propose a formative, rather than summative, approach to assessment. This means that composition students are taught about assessment and included in the creation of assessment criteria at the outset of an assignment. The thought here is that students will approach their assignments willing to take risks, knowing from the beginning what assessment model will be used and knowing that they themselves had a hand in determining it.

    I find Reilly’s guidelines for practicing aspirational assessment particularly interesting and, perhaps, helpful for future use in my own classroom:

    • Allow time for play, exploration, and gains in proficiency prior to the discussion of assessment for a particular project.
    • Look at (preferably externally identified) examples of excellent projects.
    • Develop criteria in groups after reviewing the project description, client needs (if relevant), and the course student learning outcomes pertinent to the project.
    • Allow student criteria to stand even if you, as the instructor, would have chosen other items on which to focus.
    • Make room for peer review and revision time following the development of the assessment criteria.

     
  4. jeffnaftzinger

    November 26, 2013 at 4:49 pm

    Composing, Networks, and Electronic Portfolios: Notes toward a Theory of Assessing ePortfolios — Yancey, McElroy, and Powers

1. This chapter builds on concepts from Yancey, Yancey, and Yancey… but also from scholars like Latour (regarding networks), Condon and Hamp-Lyons (on how graders read print portfolios), Rice (on how portfolios help us make connections that other writings do not), and Barnhardt (on how print and eportfolios differ).

2. The work affirms much of Yancey’s earlier work on ePortfolios, but also Condon and Hamp-Lyons’s earlier work on print portfolios, most importantly the argument that “a great deal is still unknown about what portfolios do and, perhaps even more interestingly, about the nature of the role and activities we, as teachers and readers, engage in during portfolio assessment.”

3. I’m not exactly sure, but I think it would challenge the work of those who favor standardized tests or those who oppose portfolios. This text shows how portfolios allow us not only to assess more of the student’s work than something like timed writing but also to see how the student writes for, and in, different contexts. It offers a more context-based, personal, capacious view of a student’s writing, a view which is important to fully understand and assess the student’s writing ability.

4. The authors showed that the act of creating an eportfolio allows the composer of the portfolio, and the readers of the portfolio, to make connections and find patterns that they might not have previously considered. In order to trace these networks/patterns/connections, the authors used the idea of architectural pin-ups (where they took screenshots of the site and pinned them up on the wall) and clustered the pages around subject, linking, etc. This allowed them to see how the composer of the eportfolio navigated different rhetorical spaces within the portfolio.

     
  5. Michael Neal

    November 26, 2013 at 4:52 pm

    Chapter 12 “Assessing Learning in Redesigned Online First-Year Composition Courses”
    by Tiffany Bourelle, Sherry Rankins-Robertson, Andrew Bourelle, Duane Roen

    1. How is it framed?
This chapter is framed with a problem… that of a $200 million budget cut at Arizona State University and a call by the administration to develop ways to deliver education that were less expensive without compromising the integrity of the courses. The chapter lays out how the composition program at ASU revamped their composition courses to deliver the curriculum via technology in large sections (150 students) with smaller cohorts (20 students) in either a 7 1/2- or 15-week block. Students are still required to write multiple drafts of papers, which they include in a final electronic portfolio that is scored with the “Quality Matters” rubric. (Their version of this can be found at http://ccdigitalpress.org/dwae/files/12_Bourelle_Appendix_2.pdf)

    2. What does this assessment affirm?
They are building off of the WPA Outcomes Statement and especially the “Habits of Mind” in the Framework for Success in Postsecondary Writing. The portfolio requires that students show evidence of–I believe–6 outcomes defined by the program using these documents. They submit versions of this early to receive feedback, so the assessment seems to value formative feedback, even in the large sections of the class. They have several paragraphs and sections that lay out the values associated with reflection and self-assessment, which is an integral part of the success of the portfolio. In fact, the metacognitive element is included in the rubric.

The assessment also affirms the use of technologies to make education cheaper and more efficient. They make some moves toward making sure the course is still accessible and personal (they include several videos from the curriculum to show instruction and directions), but they don’t challenge the initial budget cuts or the move to technology to deliver cheaper content. They are keeping track of student success through a quantitative four-point rubric that has very general, descriptive elements. It’s not clear to me who comments on the students’ essays and portfolios throughout the 7 1/2- or 15-week term and/or if the portfolio receives anything other than the quantitative rubric response. They also don’t make a big deal of the reliability of the rating process.

    3. What does it challenge?
    It really doesn’t challenge much: not rubrics, the electronic delivery of the course, accessibility, etc. It draws on pre-existing values defined by the community and builds on them for their own purposes.

    4. What’s interesting?
    I was curious about the “Quality Matters” rubric because they claim it’s a validated rubric. It’s not available to see without payment, and at a glance, I didn’t see how they validated it. Based on what use? For what purposes?

     
  6. jacobwcraig

    November 26, 2013 at 4:52 pm

Susan H. Delagrange, Ben McCorkle, and Catherine C. Braun. “Stirred, Not Shaken: An Assessment Remixology.” _Digital Writing Assessment and Evaluation_

    1. On what content does this chapter build to frame or situate the ideas?

This chapter builds on the work about assessing digital texts (Penrod, 2005; Huot, 2007; Ball, 2012; Manion and Selfe, 2012) that proliferated after Kathleen Yancey’s call for re-understanding what we think counts as writing. This chapter works within a frame of remix, rhetoric, cultural criticism, design, and IP law. It is specifically about grading: the kind of assessment criteria, the source of assessment criteria, and the function of assessment criteria when looking at remix projects.

    2. What concepts, works, or authors does the chapter affirm?

The chapter affirms Lessig’s ideas about remix; an application of Bob Broad’s dynamic criteria mapping (the same process that Whithaus discusses w/o citation); and a preference toward the local over the national (Huot 2002). The chapter reaffirms the value of fair use as a commonplace in the classroom. McCorkle uses fair use as a heuristic, as a frame for feedback, and for assessment during a mash-up/remix project. These authors think that assessment is a part of improving teaching and learning.

    3. What concepts, works, or authors does the chapter challenge?

    Programmatic rubrics, originality in authorship

    4. Is there anything particularly unique or interesting about the approach in this chapter?

The chapter is a multi-authored project with three sections by three different scholars teaching three different courses at three different institutions. This chapter provides interesting discussions, with examples, about what counts as a transformative work. Then, the authors rejoin to inductively arrive at four things they value about assessment: flexibility, transparency, buy-in from students (flattening the teacher-student hierarchy through student participation), and generativity, insofar as assessments should encourage critical thinking.

     
  7. jeskew2013

    November 26, 2013 at 4:53 pm

    Chapter 10
    Assessing Civic Engagement: Responding to Online Spaces for Public Deliberation

    1. This chapter discusses the principles of designing and evaluating online civic spaces, spaces where people engage with their community.

2. This chapter draws on the concepts of “productive usability” and “catalytic validity.” Productive usability suggests that at all stages, from the conceptual to the finished product, these spaces can be designed with a particular use in mind and that that use should be the priority. Catalytic validity speaks to the capacity for research projects to engage people through participation.

3. The chapter opens with a parody of the way university websites are designed. In a Venn diagram that compares what people find on a university website with what they need from a university website, the only common factor is the name of the university. Through this parody, the authors suggest that designers of websites meant for public use are often grossly unaware of what people actually want to accomplish with such sites.

    4. The assignment that they discuss, called the Online Design Project, was an interesting one. It required students to create their own civic space from the principles that they discussed in class, particularly productive usability.

     
  8. amypiotrowski

    November 26, 2013 at 4:54 pm

    Chapter 3 – Seeking Guidance for Assessing Digital Compositions/Composing by Moran and Herrington

    1. This chapter builds on the work of teachers whose students create digital compositions, namely Kevin Hodgson and Paul Allison. Moran and Herrington also discuss frameworks and principles for assessment laid out by NCTE and NWP.
2. This chapter affirms the idea that good writing is crafted to reach an authentic audience. Neither of the digital projects described is for the teacher’s eyes alone. Peer assessment and self-reflection are important elements of assessment in Hodgson’s and Allison’s classrooms.
    3. This chapter challenges the usefulness of the Six Traits+1 framework that has been very popular in K-12 writing assessment. The Six Traits may not be helpful when thinking about the skills needed and the processes used to compose multimedia projects.
4. What’s unique about this chapter is that it gives us projects, rubrics, and principles for assessment that are being used in real classrooms. The chapter links to projects created by Hodgson’s students so that the reader can view them and get a clearer idea of what Hodgson’s students are doing and what Hodgson values when he assesses these projects. The chapter also links to the Youth Voices page that Allison coordinates.

     
  9. jasonecuster

    November 26, 2013 at 4:54 pm

    Chapter 5: Emily Wierszewski, “Something Old, Something New”: Evaluative Criteria in Teacher Responses to Student Multimodal Texts

    1. On what content does this chapter build to frame or situate the ideas? As an empirical study, this piece builds on the statements of folks like Takayoshi and Huot to address the relative lack of data-based scholarship. This study aims to answer the question: “What print values do teachers use when they assess multimodal work, and what kinds of criteria seem to be unique to new, multimodal pedagogies?”

    2. What concepts, works, or authors does the chapter affirm? Builds on Michael Neal’s (woo!) work among others to suggest that we cannot and should not simply treat digital texts the same as print-based ones.

    3. What concepts, works, or authors does the chapter challenge? It challenges the very work it builds on by showing the ways in which most instructors assessing student writing still discuss print-based elements more so than others.

4. Is there anything particularly unique or interesting about the approach in this chapter? In their examination of the frequency of types of comments in the study, they note that “grammar” was still being discussed in the comments on multimodal texts, specifically by teachers with more experience, whereas other assessors did not mention grammar at all. The prevalence of these kinds of comments among more experienced teachers intrigued me, as it suggests that elements of print assessment bleed over into assessing multimodal texts for teachers who began with print as their focus; for newer teachers, to whom digital texts might be more “native,” this doesn’t happen quite as often.

     
  10. profkelp

    November 26, 2013 at 4:56 pm

    “Re-Mediating Writing Program Assessment” by Karen Langbehn, Megan McIntyre, and Joe Moxley

[This essay is an exploration of the effectiveness of USF’s Freshman Comp assessment methodology. They use a program called My Reviewers, which allows them to digitally collect, distribute, and assess student writing all in one place, in an attempt to “close the assessment loop” (teacher response, student performance/teacher assessment, and program assessment), because students and teachers can work together in real time within this digital environment. Not only do the teachers use it, but the students use it to respond to the teachers and also to critique each other’s work. There are over 80,000 student essays in this database, and all of their freshman comp classes use it.]

    1. On what content does this chapter build to frame or situate the ideas?
    The assessment in question here is situated in the classroom. From what I can tell, the software and other methods USF uses are meant to assess the students’ performances in the course. The program is not for entrance/exit exams or placement.

    2. What concepts, works, or authors does the chapter affirm?
There is a reliance on standardization and use of departmentally distributed rubrics. This program also seems to value inter-rater reliability highly (it’s a condition for the program to be working properly; that is, all raters abide by the same standards of the rubric). The philosophy of this program seems to fall right above or in the grey section on Condon’s chart (letter-graded, judgment-oriented, medium-to-large scale, with contextualized criteria). Their use of analytics to explore the effectiveness of the assessment system makes me want to say that they have a decidedly more psychometric approach than we do here at FSU.

    3. What concepts, works, or authors does the chapter challenge?
This kind of approach does not allow the same freedom, creativity, and trust to be given to comp teachers in terms of letting them design their own course and assessment approaches/ideologies. We have, by contrast, a lot more freedom here at FSU.
    That being said, their approach entails that, by the end of the course, they have a digital version of what would be the students’ writing portfolios. I don’t know how the papers are formatted and organized within the system, but the outcome is that they’re all in one place, allowing the teacher to see how the student has progressed (and the authors mention that portfolio-like ability to see the scope of the students’ progress as a plus about My Reviewers).

    4. Is there anything particularly unique or interesting about the approach in this chapter?
Within the database (which is closed to the public), the grades for all USF comp courses are visible to other USF comp teachers for the sake of comparison. The authors claim that this might be controversial, but it is helpful because it allows teachers to see grades and grading data from other classes and how their colleagues are assessing. That is, they’re reforming their own methodology by comparing their practice to other teachers’.

     
  11. jc12t

    November 26, 2013 at 4:56 pm

    Chapter 1: Making Digital Writing Assessment Fair for Diverse Writers by Mya Poe

    1. On what content does this chapter build to frame or situate the ideas?

Poe looks specifically at digital writing assessment in large-scale assessments, particularly at the state level. She is making the argument that “we have few validation studies of digital writing assessment to tell us about the impact of those assessments on students of color, working class students, and students with disabilities.” And, in fact, she argues that assessment guidelines often focus assessments away from diversity; she calls these assessments color-blind. Her project seeks to examine the roles that construct, consequence, and fairness play in digital writing assessment.

    2. What concepts, works, or authors does the chapter affirm?

Poe situates herself within ongoing conversations in certain areas of study; here are a few concepts and whom she draws upon:

Writing as/with technology: Michael Neal, George Madaus, Adam Banks

Validity: Beverly Moss, Samuel Messick

Race and Writing Assessment (she draws upon several authors of the book she co-edited): Asao Inoue, Diane Kelly-Riley (though not her chapter specifically), Anne Herrington/Sarah Stanley, and (indirectly) Behm/Miller (color-blindness)

    I also think she is responding to Behizadeh and Engelhard’s call for more communication across educational theory and writing studies.

    3. What concepts, works, or authors does the chapter challenge?

    The only authors that come to mind would be Arnetha Ball and Delpit—I don’t think Ball and Delpit would share in Poe’s idea of fairness because Poe is advocating for inclusive assessment guidelines but Ball and Delpit (from the limited time I’ve spent with their stuff) would want students of color to adapt.

    4. Is there anything particularly unique or interesting about the approach in this chapter?

Poe takes the Standards for Educational and Psychological Testing and places some of those standards in a frame of reference to begin to think about digital writing assessment that takes race into account (and maybe culture, but she doesn’t say that specifically). She notes that some writing assessment folks (Huot, O’Neill, Gallagher, Kelly-Riley, Inoue) embrace these standards, and I think Poe does as well, but she is willing to think them through for her specific context.

     
  12. sarahm1320

    November 26, 2013 at 4:56 pm

    Ch. 11 “Digital Writing Assessment and Evaluation…” Brunk-Chavez and Fourzan-Rice

1. They assume that 21st-century composing includes digital, multimodal texts, and that we need assessments to match. The chapter also seems interested in data mining for program assessment purposes, although I am not yet sure how they will accomplish this. They are also interested in providing multiple forms of feedback for students. Their programmatic assessment isn’t used to single out instructors, but rather to assess students and allow curriculum and classroom practices to be revised.

2. Thus far, they haven’t explicitly cited Whithaus, but they seem to be building on some of the ideas that he articulates: 21st-century composing is often digital and multimodal, and we need assessments that can assess this kind of work. They cite some authors that I am not familiar with for this idea: “the changes wrought in writing with technology would produce different writing, and that different writing would call for different assessment methods” (Herrington, Hodgson, & Moran, 2010, p. 204). I think they might also be building on Condon’s call for writing assessment as a generative practice, since they are making the assessment serve multiple purposes, although they don’t necessarily seem interested in researching writing assessment itself, as Condon suggests, but rather in studying it so they can see how to do it better in their own context. They quote Dr. Neal in discussing how programmatic and classroom assessment can become disconnected from teaching.

    3. Thus far, I don’t think that they are explicitly challenging anyone, although I am sure that there are plenty of people that would disagree with their work.

    4. I think that their emphasis on the integration of teaching and assessing is a solid idea; I like that their solution to their perceived problems was holistic and included the instructors: “Our answer was a digital assessment system created, maintained, and monitored by our instructors. To productively integrate evaluation and assessment results with instruction, every component of the redesigned course—learning outcomes, assignments, professional development, rubrics, and the evaluation and assessment process itself—had to be parts of a cohesive whole.”

Instructors evaluate each other’s students – this seems interesting. I’m not sure how it will play out. Hmm, now that I have read farther, it seems that first-year TAs without teaching experience will be doing the evaluating.

    It seems that the development and use of the rubrics are done through discussion, rather than some of the oppressive “norming” techniques that we have discussed. However, they don’t really describe how they come to consensus.

    I do like that this assessment is used for both formative and summative evaluation – students can receive feedback on their drafts so they have the opportunity to revise.

     
  13. E Workman

    November 26, 2013 at 5:00 pm

Chapter 7, Multimodal Assessment Project (MAP) Group: “Developing Domains for Multimodal Writing Assessment: The Language of Evaluation, the Language of Instruction”

    1. On what content does this chapter build to frame or situate the ideas?

The authors claim to be coming at assessment from a different angle–focusing on the relationship between reader and writing. They argue that focusing specifically on five domains: “(1) artifact, (2) context, (3) substance, (4) process management and technique, and (5) habits of mind” can lead to “discussions more often associated with interaction, instruction, and text creation than with evaluation.” In this sense, I think we could see this work building on Whithaus’s discussion of providing formative feedback. The authors also frame their work as responding to timed-essay tests’ concerns with inter-rater reliability, in that those concerns “trump” issues of validity and context.

    2. What concepts, works, or authors does the chapter affirm?

    They affirm Inoue and Lynn’s work on the inability of many writing assessment practices to “acknowledge the diversity of forms of writing in the real world” or to “attend to context, audience, and purpose.” They also cite Neal and Whithaus to show that “digital and multimodal forms of writing push back on decontextualized approaches to evaluation and assessment.” In addition to these authors, they affirm the work of Moss, Broad, and Broad et al. in that those pieces provide favorable alternatives to decontextualized assessment practices. Finally, they close by echoing a sentiment that we see in Neal’s book: that the student and student learning should be the focus of the evaluation, and that data collection should be a secondary concern.

    3. What concepts, works, or authors does the chapter challenge?

The piece challenges timed-essay and standardized assessments, and it also seems to push back on the idea of a rubric that is delivered from the top down. In other words, the authors are advocating for an assessment approach in which students and teachers collaboratively design assessment criteria. This philosophy also seems as though it could be used to challenge the Common Core because it advocates for local, student-centered assessments, which necessarily vary from place to place.

    4. Is there anything particularly unique or interesting about the approach in this chapter?

    I think the five domains that the authors suggest are useful in that they provide teachers with a touchstone for assessment without necessarily suggesting stringent categories or practices. For each domain, the authors analyze an example and discuss how that student work demonstrates criteria specific to the domain.

     
