Assessing Written Communication Skills in STEM Courses

Authors

  • Beth Johnson, George Mason University

DOI:

https://doi.org/10.13021/G8itlcp.10.2018.2241

Abstract

The use of "real" data to motivate students and the use of statistical software to analyze those data have become essential components of most post-secondary introductory statistics courses. The American Statistical Association (ASA) guidelines note that strong communication skills complement technical knowledge, and that students should be given frequent opportunities to refine their communication skills in a way that is tied directly to instruction in technical skills. The Guidelines for Assessment and Instruction in Statistics Education (GAISE) College Report (2016) also documented the need for post-secondary students to be able to integrate real data with a context and purpose, and the need for appropriate assessments to improve and evaluate student learning. The GAISE report lists nine goals for an introductory statistics course, and the majority of these goals require assessing students' statistical understanding through their written communication. For example, the goal that "students should be able to interpret and draw conclusions from standard output from statistical software" requires students to write a conclusion for some type of analysis.
Each semester, approximately 900 students enroll in STAT 250 at GMU, and each student completes four data analysis assignments (DAs). These assignments require students to use statistical software to analyze a data set or draw a conclusion from a simulation, and to provide written responses to questions. Each DA contains four questions, for a total of some 14,400 items that must be graded by our graduate teaching assistants (GTAs). Our GTAs are students enrolled in our Statistics Master's and Doctoral programs who do not necessarily have any prior teaching experience. The GTAs come from diverse backgrounds and, for many, English is not their first language. In short, we have many assignments and untrained graders. A detailed scoring rubric, therefore, is first developed for each question. The goal of the scoring rubric is to ensure consistency in the grading process while providing feedback to students on their work. The rubric is designed to assess the overall elements of good technical writing as well as the presentation of data and the written communication of results. Next, faculty members conduct grading training sessions with the GTAs after each DA and its scoring rubric have been developed. During these training sessions, the importance of consistency between sections and between graders is emphasized. The GTAs find these sessions helpful for understanding what the faculty are specifically looking for in a student's response to a question. Finally, to ensure that the GTAs follow the rubric, faculty members randomly select student papers and back-grade the GTAs' scores. If there is a discrepancy between the GTA and faculty scores, the faculty member works with the GTA to determine and clarify the source of the disagreement before the scores are released to the students.


Author Biography

Beth Johnson, George Mason University

Assistant Professor

Published

2018-08-08