
Dissemination of a Novel Undergraduate Research Evaluation Model

An NSF WIDER Project (DUE 13-47727)

 

Breaking News:  Applications Now Open for Phase 3 of EvaluateUR

We are currently accepting applications to participate in the third round of implementing EvaluateUR through Buffalo State's and CUR's award from the NSF WIDER program. Information about the project and EvaluateUR can be found at https://serc.carleton.edu/evaluateur/index.html
 
Informational Webinar: November 15, 12:00-1:00 p.m. (Eastern Time)
Join the webinar on November 15 to learn more about EvaluateUR and about participating in the upcoming round of pilot implementation. To register for this free webinar, please go to: https://attendee.gotowebinar.com/register/1506539131208631811
 
Application Deadline: December 7, 2018
Applications received by December 7 will be considered until all available sites for 2019 are selected. Applications received after this date will be considered only if openings remain after the December 7 deadline.
 
Notification of Acceptance: January 2019
Conditional acceptances will be made by early January 2019. An orientation program will be scheduled for late January, and accepted sites will be required to complete an activity designed to introduce them to EvaluateUR. To be accepted for Phase 3 and invited to participate in the face-to-face training session (scheduled for April in Buffalo, NY), sites must complete this activity by March 1, 2019. Further details will be provided in the conditional acceptance notification. We anticipate inviting 30 sites to participate in the upcoming round of EvaluateUR pilot implementation.
 

____________________________________________________________________

Major Goals of the Project

The main goal of this effort is to refine and broadly disseminate a successful undergraduate research (UR) evaluation model that has been in place at SUNY Buffalo State since 2008. The model is based on a theory of how student outcomes can be improved by incorporating evaluation directly into undergraduate research experiences. It yields an empirically robust assessment of student outcomes while using the data collected for that purpose to support teaching and learning during and after the research experience. The project is a collaboration among State University of New York Buffalo State (Buffalo State), the Council on Undergraduate Research (CUR), and the Science Education Resource Center (SERC) at Carleton College, Minnesota.

A guiding principle for this project was the desire to obtain reliable independent assessments of program impact without creating a measurement burden, while at the same time providing information to participating students and their mentors that could help them gain new insights into student academic strengths and weaknesses. To accomplish this, 11 outcome categories were identified, each defined by several components.

Outcomes

Throughout the research experience, the student and mentor hold a series of structured conversations, and each completes an identical assessment survey addressing every component of the 11 outcome categories (34 components in all). The surveys are completed before the research begins, at its midpoint, and at the end of the research experience. This gives mentors multiple opportunities to review and assess student work and gives students time to reflect on their strengths and weaknesses. Each survey item is scored on a five-point scale indicating that a student always, usually, often, seldom, or never displays the desired outcome for that component. Faculty mentors assess students on each component, and students assess their own progress using the identical instrument. The mentor and student then meet to discuss how each scored the survey and to explore the reasons for any differences in their respective assessments.

The student-mentor pair may also add outcomes and outcome components of their own, giving each pair the flexibility to assess discipline-specific outcomes or any other aspect of the research experience they wish to evaluate.

A summer research coordinator conducts an orientation session for students and mentors to explain the evaluation goals and methods. A web-based administration page shows the status of each student-mentor pair, helping the administrator track each pair and ensure that surveys and reports are completed in the proper sequence and at the right points in the research program. The administrator releases forms only when a pair is ready to complete them, and automated reminders prompt the pair to complete each form and to meet to discuss their scores.
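For readers who want to see the comparison step made concrete, the hypothetical Python sketch below shows one way paired mentor and student ratings on the five-point scale could be compared to highlight components worth discussing at the follow-up meeting. The component names, ratings, and gap threshold are illustrative assumptions, not part of the EvaluateUR software.

    # Hypothetical sketch: compare mentor and student ratings on the
    # five-point scale (5 = always ... 1 = never) and flag components
    # where the two assessments differ enough to warrant discussion.
    # Component names, ratings, and the threshold are illustrative only.

    SCALE = {"always": 5, "usually": 4, "often": 3, "seldom": 2, "never": 1}

    def flag_discussion_items(student_scores, mentor_scores, threshold=2):
        """Return components whose student/mentor ratings differ by >= threshold."""
        flagged = []
        for component, student_rating in student_scores.items():
            mentor_rating = mentor_scores.get(component)
            if mentor_rating is None:
                continue
            gap = abs(SCALE[student_rating] - SCALE[mentor_rating])
            if gap >= threshold:
                flagged.append((component, student_rating, mentor_rating))
        return flagged

    # Example usage with made-up component names and ratings.
    student = {"poses a research question": "usually", "interprets data": "always"}
    mentor = {"poses a research question": "seldom", "interprets data": "usually"}
    print(flag_discussion_items(student, mentor))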

To learn more about the development of the model, additional details of model implementation, and evaluation results at Buffalo State, refer to Singer and Weiler (2009) and Singer and Zimmerman (2012).

 

Timeline for scaling up the UR evaluation model

In 2016, we migrated web support for the evaluation from the Buffalo State server to a new host site at the Science Education Resource Center (SERC) at Carleton College in Minnesota. At the same time, we added new features and developed resources to guide evaluation implementation on campuses participating in our pilot efforts in 2017-2019. These changes open EvaluateUR to many more campuses: web resources are available to orient new users, options will be provided to modify the evaluation by adding campus-specific questions, and the site will support the easy generation of reports with a limited number of statistical measures.
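As an illustration of what a report with a limited number of statistical measures might contain, the hypothetical sketch below computes mean ratings per outcome category at each survey point (pre, mid, end). The category names and record layout are assumptions for illustration, not the actual data model used on the SERC site.

    # Hypothetical sketch of a simple summary report: mean rating per outcome
    # category at each survey point (pre, mid, end). Category names and the
    # record layout are illustrative, not the actual EvaluateUR data model.
    from collections import defaultdict
    from statistics import mean

    def summarize(records):
        """records: iterable of (category, survey_point, numeric_rating) tuples."""
        grouped = defaultdict(list)
        for category, point, rating in records:
            grouped[(category, point)].append(rating)
        return {key: round(mean(vals), 2) for key, vals in grouped.items()}

    # Example usage with made-up ratings on the five-point scale.
    data = [
        ("communication", "pre", 3), ("communication", "pre", 2),
        ("communication", "mid", 4), ("communication", "end", 5),
    ]
    for (category, point), avg in sorted(summarize(data).items()):
        print(f"{category:15s} {point:4s} mean = {avg}")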

Scaling up the evaluation model involves three phases between 2017 and 2019.

            Phase 1: Limited to 3-5 institutions with summer research programs structured like the program at Buffalo State. This phase allowed us to further validate the Buffalo State UR model by showing that it can be successfully implemented on other campuses with similar summer research programs.

            Phase 2: A group of 10-12 institutions (including those involved in Phase 1) with programs similar to Buffalo State's, but with some differences, allowed us to explore how best to meet the needs of a more diverse group of institutions.

            Phase 3: A new group of institutions will be recruited in fall 2018 to participate in the third and final round of implementing EvaluateUR. For more information on EvaluateUR, and to apply for the Phase 3 opportunity, please see: https://serc.carleton.edu/evaluateur/index.html