Basic Email Assignment Assessment Rubric
Perhaps the most common type of assessment rubric is the analytic rubric, which identifies the key features of a given communication task, either because those features are critical to general success in the activity or because they are the pedagogical focus of particular learning objectives. Such rubrics provide descriptive feedback rather than specific advice for student improvement, and the individual features can be weighted to reflect their relative importance. Overall, an analytic rubric can reinforce valuable communication principles, suggest specific areas of strength and weakness, and provide the basis for future improvement and goal setting. No list of descriptive features, however detailed, equates precisely to the overall communicative effect, and analytic scoring thus should not be confused with holistic assessment.
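To make the weighting concrete, consider a minimal sketch of how an analytic rubric's weighted factors combine into a single score. The sketch below is written in Python; the criteria, weights, and 1-4 rating scale are hypothetical illustrations, not drawn from any particular published rubric:

    # Minimal sketch of weighted analytic-rubric scoring.
    # Criteria, weights, and the 1-4 scale are hypothetical examples.
    WEIGHTS = {"purpose": 0.30, "organization": 0.25,
               "audience": 0.25, "mechanics": 0.20}  # weights sum to 1.0

    def weighted_score(ratings):
        """Combine per-criterion ratings (1-4) into one weighted score."""
        return round(sum(WEIGHTS[c] * r for c, r in ratings.items()), 2)

    # Example: a student strong on purpose but weak on mechanics.
    print(weighted_score({"purpose": 4, "organization": 3,
                          "audience": 3, "mechanics": 2}))  # prints 3.1

Because the weights determine how heavily each criterion counts, even modest changes to them can shift a student's overall score, which is one reason weighting decisions deserve the same scrutiny as the criteria themselves.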
When it is not possible or desirable to assess communication work in terms of independent features, or when such features significantly overlap or interact, holistic assessment is the appropriate choice. Holistic rubrics typically focus on areas for improvement and provide qualitative feedback at designated competency levels.
While mixing analytic and holistic assessment approaches within a single rubric is possible, such a strategy lends itself all too easily to overpenalizing students for particular weaknesses or misleadingly suggesting that the designated features constitute an exhaustive list of communication concerns. In that sense, the more detailed and precisely weighted the rubric, the more it may distort any holistic assessment.
Sometimes a rubric needs to be quite specific because the learning objectives of the assignment or the subject of the student's work dictate a narrower focus. Whenever an assignment addresses objectives not covered by any other course assignment, the rubric needs to reflect that level of specificity. When several assignments share learning objectives (broad communication concerns about purpose, context, organization, etc.), then the rubrics likewise will feature these rhetorical principles. Here, too, rubrics can be hybrids that carry forward general communication concepts from other assignments while introducing new ones specific to the current assignment. General rubrics may extend beyond the classroom to express programmatic or even institutional assessment concerns.
The number of resources on assessment can be overwhelming. Moreover, many would argue that assessment is best developed locally, not only because it is naturally suited to the individual teacher, student body, and institution, but because the very process of creating rubrics helps build and refine learning objectives within a given community. Certainly the ERIC Clearinghouse on Assessment and Evaluation (ERIC/AE) can serve as a useful resource. Its Scoring Rubrics: Definitions & Construction (ERIC/AE, 2000b) is a helpful starting point for those new to assessment rubrics or those seeking general resources on creating and using scoring rubrics. Internet searches can be narrowed by communication activity, subject, type of rubric, and educational level.
Brookhart, S. M. (1999). The art and science of classroom assessment: The missing part of pedagogy. ASHE-ERIC Higher Education Report (Vol. 27, No. 1). Washington, DC: The George Washington University, Graduate School of Education and Human Development.
Chicago Public Schools (1999). Rubric Bank.
Danielson, C. (1997a). A collection of performance tasks and rubrics: Middle school mathematics. Larchmont, NY: Eye on Education.
Danielson, C. (1997b). A collection of performance tasks and rubrics: Upper elementary school mathematics. Larchmont, NY: Eye on Education.
Danielson, C., & Marquez, E. (1998). A collection of performance tasks and rubrics: High school mathematics. Larchmont, NY: Eye on Education.
Delandshere, G., & Petrosky, A. (1998). Assessment of complex performances: Limitations of key measurement assumptions. Educational Researcher, 27(2), 14-25.
ERIC/AE (2000a). Search ERIC/AE draft abstracts.
ERIC/AE (2000b). Scoring Rubrics: Definitions & Construction.
Gay, L. R. (1987). Selection of measurement instruments. In Educational research: Competencies for analysis and application (3rd ed.). New York: Macmillan.
Haswell, R., & Wyche-Smith, S. (1994). Adventuring into writing assessment. College Composition and Communication, 45, 220-236.
Knecht, R., Moskal, B., & Pavelich, M. (2000). The design report rubric: Measuring and tracking growth through success. Proceedings of the Annual Meeting of the American Society for Engineering Education, St. Louis, MO.
Leydens, J., & Thompson, D. (1997, August). Writing rubrics: Design (EPICS) I. Internal communication, Design (EPICS) Program, Colorado School of Mines.
Moskal, B. M. (2000a). Assessment resource page.
Moskal, B. M. (2000b). Scoring rubrics: What, when and how? Practical Assessment, Research & Evaluation, 7(3).
Moskal, B. M., & Leydens, J. A. (2000). Scoring rubric development: Validity and reliability. Practical Assessment, Research & Evaluation, 7(10).
Rafilson, F. (1991). The case for validity generalization. Practical Assessment, Research & Evaluation, 2(13).
Schrock, K. (2000). Kathy Schrock's Guide for Educators.
State of Colorado (1998). The Rubric.
Yancey, K. B. (1999). Looking back as we look forward: Historicizing writing assessment. College Composition and Communication, 50, 483-503.