Dissemination of a Novel Undergraduate Research Evaluation Model
An NSF WIDER Project (DUE 13-47727)
Major Goals of the Project
The main goal of this effort is to refine and broadly disseminate a successful undergraduate research (UR) evaluation model that has been in place at SUNY Buffalo State since 2008. The model is based on a theory of how student outcomes can be improved by incorporating evaluation directly into undergraduate research experiences. The model enables empirically robust assessment of student outcomes while using the data collected for that purpose to support teaching and learning during and after the student research. The project is a collaboration among SUNY Buffalo State (Buffalo State), the Council on Undergraduate Research (CUR), and the Science Education Resource Center (SERC) at Carleton College, Minnesota.
A guiding principle for this project is the desire to obtain reliable, independent assessments of program impact without creating a measurement burden, while providing information that can help participating students and their mentors gain new insights into student academic strengths and weaknesses. To accomplish this, 11 outcome categories were identified, each defined by several components.
As part of repeated conversations, the student and mentor each complete identical assessment surveys addressing every component of these 11 outcome categories (34 components altogether). The surveys are completed before the research begins, in the middle of the research, and at the end of the research experience. This gives mentors multiple opportunities to review and assess student work and provides time for students to reflect on their strengths and weaknesses.

The survey items are scored on a five-point scale denoting that a student always, usually, often, seldom, or never displays the desired outcome for each component. Faculty mentors assess students on each component, and students assess their own progress using the identical instrument. The mentor and student then meet to discuss how each scored the survey and to explore the reasons for any differences in their respective assessments.

Each student-mentor pair also has the option to add outcomes and outcome components, giving the pair the flexibility to assess discipline-specific outcomes or any other aspect of the research experience they are interested in assessing. A summer research coordinator conducts an orientation session for students and mentors to explain the evaluation goals and methods. A web-based administration page shows the status of each student-mentor pair, helping the administrator ensure that surveys and reports are completed in the proper sequence and at the correct time in the research program. The administrator releases forms only when the pair is ready to complete them, and automated reminders prompt each pair to complete the forms and to meet to discuss how each scored the survey items.
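The paired-survey comparison at the heart of the model can be sketched in code. This is only an illustrative sketch: the component names, rating data, and the discrepancy threshold below are hypothetical assumptions, not part of the actual Buffalo State instrument, which defines 11 categories with 34 components.

```python
# Sketch of comparing a student's self-assessment with the mentor's
# assessment on the identical instrument, to surface talking points
# for the follow-up structured conversation.
# Component names and the threshold are illustrative assumptions.

# Five-point frequency scale used to score each survey item.
SCALE = {"never": 1, "seldom": 2, "often": 3, "usually": 4, "always": 5}

def flag_discrepancies(student_ratings, mentor_ratings, threshold=2):
    """Return components where student and mentor ratings differ by at
    least `threshold` scale points, for discussion at their meeting."""
    flagged = []
    for component, s_label in student_ratings.items():
        m_label = mentor_ratings.get(component)
        if m_label is None:
            continue  # component not rated by the mentor
        gap = SCALE[s_label] - SCALE[m_label]  # positive: student rates higher
        if abs(gap) >= threshold:
            flagged.append((component, s_label, m_label, gap))
    return flagged

# Hypothetical mid-summer ratings for one student-mentor pair.
student = {"communicates results orally": "always",
           "interprets data independently": "usually"}
mentor = {"communicates results orally": "often",
          "interprets data independently": "seldom"}

for component, s, m, gap in flag_discrepancies(student, mentor):
    print(f"{component}: student={s}, mentor={m}, gap={gap:+d}")
```

In this sketch, components where the two ratings diverge would be natural starting points for the pair's discussion of why their assessments differ.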
To learn more about the development of the model, additional details of model implementation, and evaluation results at Buffalo State, refer to Singer and Weiler (2009) and Singer and Zimmerman (2012).
Timeline for scaling up the UR evaluation model
As of spring 2016, we are in the process of migrating web support for the evaluation from the Buffalo State server to a new host site at the Science Education Resource Center (SERC) located at Carleton College in Minnesota. At the same time, we are adding new features and developing resources to guide evaluation implementation on campuses participating in our pilot efforts in 2017 and 2018. This change will open the website to many more campuses. Web resources will orient new users, campuses will be able to modify the evaluation by adding campus-specific questions, and the site will support easy generation of reports with a limited set of statistical measures.
Scaling up the evaluation model will involve several phases between 2017 and 2019.
Phase I: Limited to 3-5 institutions with summer research programs structured like the program at Buffalo State. This phase will allow us to further validate the Buffalo State UR model by showing that it can be successfully implemented on other campuses with similar summer research programs.
Phase II: A group of 10-12 institutions (including those involved in Phase I) with programs that are similar to Buffalo State's program, but with some differences that will allow us to explore how best to meet the needs of a more diverse group of institutions.
Phase III: To be determined based on formative assessment of Phase II.
Expectations and Support Provided to Pilot Sites
To validate the Buffalo State evaluation model, it is essential that Phase I campuses adhere closely to the model’s methodology by following all the steps in the Buffalo State process, including use of all the surveys with the follow-up structured conversations between students and mentors. This requires a coordinator/director to serve as the evaluation administrator. That person will be responsible for tracking the progress of the student-mentor teams, sending reminders as needed to ensure that each team completes the evaluation process at the appropriate times over the summer, reviewing reports for completeness, and participating in regular communications with project team members. Participating campuses are expected to obtain IRB* approval before starting the evaluation. Campuses that join the project will be provided up to $5,000 to cover expenses related to their participation (more details about compensation will be provided after campuses are selected). A 2-day workshop is planned for early 2017 to orient participating evaluation administrators to the evaluation method and other aspects of the project.
* While campus IRB procedures differ, this should be a straightforward process and campuses can use the existing Buffalo State surveys and methods on the IRB application.
Minimum requirements for participating campuses:
- Strong interest in obtaining reliable program evaluation data
- Minimum of 6-week summer undergraduate research program
- Summer research program established for at least 3 years
- Minimum of 6 student-mentor pairs participating in summer research program
- Summer research program coordinator/director with at least 3 years' experience who will be responsible for tracking program implementation, including completion of evaluation steps following the Buffalo State model
Preferred characteristics of participating programs:
- Summer undergraduate research program of 8-10 weeks
- Coordinator/director of summer research program with more than 3 years' experience
- Summer research program established for more than 3 years
- More than 6 student-mentor pairs participating in summer research program
- Program that includes initial orientation for participants
- Program that includes completion of progress and final reports by students and mentors
- Campus support for undergraduate research (for students, mentors, and in the curriculum)
- Prior efforts to evaluate student learning outcomes
We anticipate selecting only a few campuses for Phase I. There will be a series of virtual information/orientation sessions and a workshop at Carleton College, Minnesota, or Buffalo State early in 2017.