NOTE: Because of the size of the file, we’re linking to it here instead of embedding it! Enjoy!
Assessment Timeline Process Memo
Jason Custer and Kendall Parris
For our timeline project, we wanted to highlight some of the unique features belonging to particular machine scoring systems. These features can, arguably, constitute a kind of technological identity unique to each system; thus, we spent a good while trying to figure out how best to design a visual that would represent both a time spectrum and the technologies’ different “identities” simultaneously. After some brainstorming, and the recollection of a similar historical parody one of us had seen on the internet some months before (you can see it here: http://www.collegehumor.com/facebook-history), we decided that creating a spoof Facebook wall (including a series of posts and discussions) would allow us to visualize both the advent and the individuality of each of the ten machine scoring technologies we chose to represent.
In an effort to create believable Facebook posts, we incorporated Facebook vernacular, emoticons, carefully selected profile pictures, hashtags (because, for some reason, Facebook has integrated them…), et cetera into our project. We also tried to isolate the core concept or ability that individualized each technology so that we could use it in a post to define that particular system as a kind of Facebook “personality.” Sometimes we put the systems into conversations with each other in the comments box to help elucidate a particular system’s function (and sometimes we did this just for fun). Also, instead of “likes,” we made that section of each post relay validity percentages.
Additionally, to represent the relationships between software (“better,” “worse,” “more/less effective,” etc.), we found traditional means of representation (proximity, size, color) lacking, since these measures do not immediately indicate the benefits of each piece of software. By choosing the Facebook theme for our work, we were able to use a somewhat discreet method of signifying “performance”: Facebook’s built-in, already somewhat vague “like” system. By equating Likes with performance, we could visually represent an otherwise abstract concept for which traditional visual arrangement seemed lacking. In this way, like the measures of performance and validity themselves, the numbers are simply presented, and the reader is left to determine their value.
Bibliography and Works Cited
Burstein, Jill, Claudia Leacock, and Richard Swartz. “Automated Evaluation of Essays and Short Answers.” Loughborough University (2001): 1-13.
Hearst, Marti A. “The Debate on Automated Essay Grading.” IEEE Intelligent Systems & Their Applications 15.5 (2000): 22.
Herrington, Anne, and Charles Moran. “What Happens when Machines Read our Students’ Writing?” College English 63.4 (2001): 480.
Jordan, Sally. “E-Assessment: Past, Present and Future.” The Open University: Pedagogic Directions: 1-20.
Leacock, Claudia, and Martin Chodorow. “C-Rater: Automated Scoring of Short-Answer Questions.” Computers & the Humanities 37.4 (2003): 389-405.
Mitchell, Tom, et al. “Towards Robust Computerised Marking of Free-Text Responses.” Loughborough University (2002): 233-249.
Valenti, Salvatore, Francesca Neri, and Alessandro Cucchiarelli. “An Overview of Current Research on Automated Essay Grading.” Journal of Information Technology Education 2 (2003).
Werner, Gregory J. “A Complete Approach to Automated Essay Grading.” George Washington University: Department of Computer Science: 1-6.
Zhang, Mo. “Contrasting Automated and Human Scoring of Essays.” Educational Testing Service: R & D Connections 21 (2013).