The Hamp-Lyons and Condon piece serves as a nice precursor to Yancey’s piece because they are working only with print portfolios (the piece was published in 1993). They conduct a study of the assumptions that readers bring to portfolio assessment because they are “committed” to portfolio assessment and want to understand more about “all we do, even when it is successful” (315). They write about the portfolio assessment program at Washington State University and structure their article around the exploration of five assumptions, summarized briefly below.
1. “Because a portfolio contains more texts than a timed essay examination, it provides more evidence and therefore a broader basis for judgment, making decisions easier” (319). They dispute this claim by arguing that readers cannot actually read portfolios holistically because portfolios contain multiple texts; these multiple texts “force readers to consider one text in light of the other” (319). Thus, decisions are more difficult because multiple texts, and numerous characteristics of those texts, must be considered.
2. “A portfolio will contain texts of more than one genre, and multiple genres also lead to a broader basis for judgments, making decisions easier” (319). They identify two underlying assumptions here: that the quality of a writer’s texts will vary from genre to genre, and that a portfolio will contain multiple genres. If quality does not differ, they say, then there is no viable reason to include multiple genres. If, on the other hand, it does differ, readers’ decisions become harder because they must move back and forth between texts, considering earlier texts in light of later ones. Additionally, they note that assumptions one and two also seem to assume that readers will attend to the entire portfolio, which they have not found to be the case. Typically, readers move toward a decision while reading the first text and use the rest of their reading to support that decision. They also found that “readers tend to reduce the cognitive–and time–load in portfolio reading by finding short cuts to decisions” (322). Because of this, they claim that students who place their “best” text at the end of the portfolio might be doing themselves a disservice.
3. “Portfolios will make process easier to see in a student’s writing and enable instructors to reward evidence of the ability to bring one’s own text significantly forward in quality” (323). Again, they found this not to be the case, because their portfolios did not contain evidence of process, meaning multiple drafts, notes on revisions, and so on. If we want to see evidence of process, they advise requiring multiple drafts of a single text rather than several “polished” pieces (a showcase portfolio).
4. “Portfolio assessment allows pedagogical and curricular values to be taken into account” (324). They claim that this is the case in their assessment because the “connection between curriculum and portfolio is carefully and consistently built” (324). In other words, they create many opportunities for faculty to come together and discuss the portfolios, their pedagogies, the curriculum, etc. Without this kind of working environment, the portfolio may not represent pedagogical and curricular values.
5. “Portfolio assessment aids in building consensus in assessment and instruction” (325). Again, they claim that this is not necessarily always the case. The process of faculty working to find points of agreement and places of compromise is important for the success of this type of assessment.
Their ultimate conclusion is that portfolio assessment should be recursively revised based on faculty conversations, data from the assessment itself, and knowledge of what does and does not seem to be working. They also close by discussing the uses and limitations of external criteria.