The main goal of this NSF WIDER project, led by Dr. Jill Singer at SUNY Buffalo State (hereafter Buffalo State), was to scale up and disseminate an evaluation method, known as EvaluateUR, for measuring learning and related outcomes for students conducting mentored summer and academic-year research. The project was a collaboration among Buffalo State, the Science Education Resource Center (SERC) at Carleton College, and the Council on Undergraduate Research (CUR).
EvaluateUR centers on having both faculty mentors and their student researchers assess student knowledge and skills three times over the course of the student’s research project (at the beginning, middle, and end of the research), followed each time by student-mentor conversations to compare and discuss the reasons for their respective assessments. One of the novel features of this approach to evaluation is that it is embedded in the research and mentoring processes, while at the same time generating reliable data that directors of undergraduate research programs can use to document their programs’ impacts. More details about the development of the method are provided in Singer and Weiler (2009), Singer and Zimmerman (2012), and Singer et al. (in prep).
In EvaluateUR, students and mentors complete identical assessment surveys that include 11 outcome categories, each defined by several measurable student behaviors, for a total of 35 outcome components (Table 1). The outcome categories shown in Table 1 are also closely aligned with the wide range of essential workplace competencies identified by the Office of Career, Technical, and Adult Education, U.S. Department of Education, and by the National Association of Colleges and Employers (www.cte.ed.gov/employabilityskills and www.naceweb.org/career-readiness/competencies/career-readiness-defined).
The assessments are completed before the student’s research begins, in the middle of the research, and at the end of the research experience. This phased approach gives mentors multiple opportunities to review and assess student work and provides time for students to reflect on their strengths and weaknesses. EvaluateUR components are scored on a five-point scale ranging from “always” to “never,” indicating the extent to which a student has displayed the outcome component being assessed. The instrument is first provided to each student-mentor pair at an orientation session that precedes the beginning of student research activities, so that both students and mentors can become familiar with the method.
Beginning with a “baseline assessment” before research begins, followed by two additional assessments at the midpoint and end of the research project, students score themselves on each outcome component, and their research mentors, using the same instrument, score their students. The baseline assessment is completed together so that the student and mentor can discuss how each outcome component relates to the student’s research; at this meeting, they also have the option to add project-specific outcomes to the assessment. The mid-research and end-of-research assessments are completed independently. After each assessment, the student and mentor receive a link to a score report showing how each rated every outcome component; components with a score difference of 2 or more points are highlighted to call attention to them. The student and mentor then meet to compare their scores and explore the reasons for any differences. EvaluateUR stresses that the scores themselves are less important than the conversation that follows, in which the student and mentor share their rationales for assigning particular scores and discuss any differences in their perceptions. A series of instructional videos and other resources to help adopters learn how to implement EvaluateUR are available on the website.
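The score-report logic described above can be illustrated with a minimal sketch. This is not the actual EvaluateUR implementation; the function name and data layout are hypothetical, and only the highlighting rule (flag components where student and mentor scores differ by 2 or more points on the five-point scale) comes from the description in the text.

```python
def flag_discrepancies(student_scores, mentor_scores, threshold=2):
    """Hypothetical helper: return the outcome components whose student
    and mentor scores differ by `threshold` or more points, along with
    both scores, so they can be highlighted in the score report."""
    flagged = []
    for component, s_score in student_scores.items():
        m_score = mentor_scores[component]
        if abs(s_score - m_score) >= threshold:
            flagged.append((component, s_score, m_score))
    return flagged

# Example: scores on a five-point scale ("never" = 1 ... "always" = 5)
student = {"Communication": 4, "Practice and Process of Inquiry": 5}
mentor = {"Communication": 4, "Practice and Process of Inquiry": 3}
print(flag_discrepancies(student, mentor))
# → [('Practice and Process of Inquiry', 5, 3)]
```

Flagged components would then serve as the starting point for the student-mentor conversation about why their perceptions differ.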
The EvaluateUR process is facilitated by undergraduate research program directors, who conduct orientations for students and mentors to explain the EvaluateUR goals and steps. A web-based administrator’s dashboard shows the status of each student-mentor pair and helps the administrator ensure that assessments are completed at the appropriate points in the research program. Automated reminders about completing each step are sent throughout the EvaluateUR process.
Particularly innovative aspects of the EvaluateUR approach include its applicability to all disciplinary areas; its support for students, faculty mentors, and undergraduate research directors; and its phased approach to assessing student knowledge and skill development throughout the course of the UGR experience. EvaluateUR also includes a web-based statistical package, known as EZStats, that automatically generates composite descriptive measures for each outcome component, for both students and mentors. The format of the EZStats output makes these measures readily usable in reports. An instructional video and a user guide for EZStats can also be found on the EvaluateUR website.
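As a rough illustration of the kind of composite descriptive measures described above, the sketch below computes a per-component count, mean, and standard deviation across a group of students. This is an assumption about what such measures might look like, not EZStats itself; the function and the sample data are hypothetical.

```python
from statistics import mean, stdev

def describe(scores_by_component):
    """Hypothetical helper: given {component: [scores across students]},
    return simple descriptive measures (n, mean, sd) per component."""
    return {
        component: {
            "n": len(scores),
            "mean": round(mean(scores), 2),
            "sd": round(stdev(scores), 2) if len(scores) > 1 else 0.0,
        }
        for component, scores in scores_by_component.items()
    }

# Example: end-of-research student self-scores for two components
student_scores = {
    "Communication": [3, 4, 5, 4],
    "Creativity": [2, 3, 3, 4],
}
print(describe(student_scores))
```

Running the same summary separately on student and mentor scores would yield the paired, report-ready measures the text describes.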
By the end of the five-year WIDER grant, ~50 colleges and universities had implemented EvaluateUR, and hundreds of faculty mentors and UGR directors had been introduced to the EvaluateUR method through a variety of presentations and webinars at national UGR and STEM meetings. Surveys conducted in 2019 with students, faculty mentors, and undergraduate research directors using EvaluateUR found that ~90% of respondents judged that the EvaluateUR discussions helped students gain a better understanding of their academic and professional strengths and weaknesses. Students showed statistically significant positive gains on all 35 outcome components, and research mentors found it easier to identify the academic strengths and weaknesses of the students they mentored, enabling them to focus their mentoring efforts more productively.
An independent evaluation found that EvaluateUR provided an innovative method for evaluating undergraduate research, one that could reliably measure specific knowledge and skill outcomes while also contributing directly to student learning. At the conclusion of the WIDER grant, EvaluateUR transitioned to a subscription-based service, with general support provided by Buffalo State and technical support provided by SERC.