The first step in assessment is defining what (and who) you are counting as participants in undergraduate research, scholarship, and creative inquiry on your campus and setting a consistent practice for tracking participation.
 
In deciding how to track participation, think about what you need to know and what you might assess. Consider the following items to track in addition to participation itself (a schematic example of a tracking record follows this list):
  • Who does research? 
  • How much and/or for how long? 
  • With what intensity or rigor? 
  • Is there funding for undergraduate researchers and their mentors, and, if so, from whom?
  • What is the product or outcome of the undergraduate research? 
  • Where do undergraduate students conduct their research (in-class, out-of-class, or a combination thereof)?
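For campuses that track participation in a spreadsheet or database, the sketch below shows one way a single participation record might be structured. This is a minimal illustration, assuming hypothetical field names and categories; adapt them to the definitions you settle on for your campus.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical schema for one participation record; field names are
# illustrative placeholders, not a prescribed standard.
@dataclass
class ParticipationRecord:
    student_id: str                         # institutional ID (see IRB/FERPA notes below)
    term: str                               # when the experience took place, e.g., "Fall 2024"
    setting: str                            # "in-class", "out-of-class", or "combined"
    hours_per_week: Optional[float] = None  # intensity
    duration_weeks: Optional[int] = None    # duration
    funding_source: Optional[str] = None    # e.g., "external grant", "internal grant", or None
    mentor_id: Optional[str] = None         # links the record to the student's mentor
    product: Optional[str] = None           # outcome, e.g., "poster", "thesis", "publication"

# Example: a semester-long, internally funded experience that produced a poster
record = ParticipationRecord(
    student_id="S001", term="Fall 2024", setting="out-of-class",
    hours_per_week=10, duration_weeks=15,
    funding_source="internal grant", product="poster",
)
```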
 
See more examples of how other institutions have tracked and assessed undergraduate research in Scholarship and Practice of Undergraduate Research (SPUR) and in this literature review: Assessing the Impact of Undergraduate Research Experiences on Students: An Overview of Current Literature.
 
 
Creating a Plan
How do you decide when and how to assess the impacts of undergraduate research participation? See some ideas for measuring student development in the Spring 2018 issue of SPUR.
 
  • Pick a program and an outcome you can assess now by using data you have or can easily access.
  • Plan ahead for data collection for the future; for example, how can you track students now so you can measure outcomes for alumni?
 
Consider the following when you create your plan:
  • What resources do you have in terms of personnel, financial investment, existing data, and so on?
  • What is your timeframe for collecting data and completing the analysis?
  • What relationships and collaborations can you build through this assessment project, such as with faculty who are interested in publishing on the scholarship of teaching and learning or with members of the Institutional Review Board or Institutional Assessment and Research offices?
  • What is the purpose and the audience of the assessment, and how will you communicate your findings (e.g., a report to campus leadership or a publishable research project that creates generalizable findings)?
 
 
Asking the Right Questions
What are the big questions that need further assessment and research? How can your research and assessment connect to what we already know about UR and add to what we need to know? 
  • See a review of questions that have been examined in the literature on UR in the CUR Quarterly article Assessing the Impact of Undergraduate Research Experiences on Students: An Overview of Current Literature and in the report from the National Academies of Sciences, Engineering, and Medicine on the need for more rigorous research on the impact of undergraduate research, particularly the need for greater attention to questions about equity in undergraduate research participation.
  • Steps to create a research question:
    1. Outline the learning outcomes, skills, or benefits you hope your students are getting out of their research experience.
    2. Which of these can you measure with data you already have? Create a question you can answer now based on those data. 
    3. Which outcomes can you find ways to measure in the future? Create longer-term questions for those.
  • Get ideas for what to assess and how to do it from the CUR community in the CUR Member Forum: see past discussions on assessment instruments and strategies, or ask your own question.
 
Assessment Tips by Data Type 
Assessment of undergraduate research and the data you use in that assessment will depend on what type of research you are doing (e.g., early-stage/exploratory research or studies of efficacy and effectiveness of established programs). See the IES-NSF Common Guidelines for Educational Research and Development for more information on types of educational research. This section provides considerations for using qualitative and quantitative data.
 
Surveys
 
Check for previously established instruments. The following are some places to look:
  • Look in the literature in your discipline for measures of content-specific knowledge or concept inventories.
  • Review literature in psychology, education, and the social sciences for measures of learning and development.
 
Creating your own surveys:
  • Only ask for what you’ll actually use to keep surveys short.
  • Ask about undergraduate researcher behavior or content knowledge when possible.
  • Ask about self-reported learning when appropriate (see Pike, G. 2011. “Using College Students’ Self-Reported Learning Outcomes in Scholarly Research.” New Directions for Institutional Research, 150: 41–58).
 
Self-reported learning includes survey or interview questions that ask students how much they have learned, gained, or developed in an area (e.g., to what extent did your summer research contribute to your understanding of the research process?) or that ask students to rate their own abilities in a pretest or posttest (e.g., please rate your ability to work on a team with other researchers). This type of data is strongly tied to satisfaction and is limited by students’ ability to know what they don’t know. For example, students often rate themselves higher on pretests and lower on posttests once they have seen how much they have to learn. However, these data can be useful in the following ways (a brief analysis sketch follows this list):
  • Comparisons between treatments (e.g., students who did undergraduate research compared to those who did not).
  • Relationships between items (e.g., the relationship between reporting feeling supported by a faculty mentor and self-reported learning).  
  • When compared with another measure to increase validity (e.g., is self-reported learning related to increases in GPA, retention, graduation rates, or faculty evaluation of learning?).
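As one illustration of a between-treatments comparison, the sketch below runs Welch’s t-test on self-reported gains from two groups using SciPy. The scores are made-up placeholders; with real data you would also check assumptions, report effect sizes, and account for response bias.

```python
from scipy import stats

# Illustrative placeholder scores (1-5 Likert "learning gains"); not real data.
ur_gains = [4, 5, 4, 3, 5, 4, 4, 5]      # students who did undergraduate research
non_ur_gains = [3, 3, 4, 2, 3, 4, 3, 3]  # students who did not

# Welch's t-test compares the group means without assuming equal variances.
t_stat, p_value = stats.ttest_ind(ur_gains, non_ur_gains, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```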
 
Self-reported data are arguably most useful for assessing a program rather than individual student outcomes. As Lopatto (2017) notes, multiple measures of self-reported data can be used to diagnose what is or is not working in a program, especially when compared with other programs or other campuses. The article also provides guidance on moving beyond attitudinal questions and on using self-reported data to explore student decision making, judgment, and communication.
 
Qualitative Data
This may include interviews, focus groups, student reflections, writing samples, or student products such as research papers or posters.
To analyze these data, you can use rubrics or create a coding structure; a minimal sketch of a keyword-based first pass at coding follows.
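The snippet below is only a mechanical illustration of what a coding structure can look like: it counts hypothetical keyword-based codes in an interview excerpt. Real coding schemes are developed iteratively by human coders and checked for inter-rater reliability.

```python
import re
from collections import Counter

# Hypothetical codes and keywords for interview excerpts about research experiences.
CODES = {
    "ownership": ["my project", "my own", "independence"],
    "mentoring": ["mentor", "advisor", "faculty"],
    "identity": ["scientist", "researcher", "belong"],
}

def code_excerpt(text: str) -> Counter:
    """Count how often each code's keywords appear in one excerpt."""
    counts = Counter()
    lowered = text.lower()
    for code, keywords in CODES.items():
        counts[code] = sum(len(re.findall(re.escape(kw), lowered)) for kw in keywords)
    return counts

excerpt = "Working with my mentor made me feel like a real researcher."
print(code_excerpt(excerpt))  # mentoring: 1, identity: 1, ownership: 0
```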
 
Quantitative and Institutional Data
Your institution already tracks a number of student characteristics and outcomes, including retention in college, retention in major/field of study, GPA, number of units completed, and graduation rates. How can you utilize this resource to assess undergraduate research? 
 
This type of data is particularly useful for comparing students who have done research to those who haven’t. The Institute of Education Sciences’ What Works Clearinghouse provides best practices in utilizing quantitative data in quasi-experimental design. 
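To illustrate the spirit of such a design, the sketch below naively matches each student who did research to the non-researcher with the closest prior GPA before comparing graduation rates. The column names and values are hypothetical; a real analysis should match on more covariates and follow the What Works Clearinghouse standards.

```python
import pandas as pd

# Hypothetical institutional data; real analyses would pull these fields
# from your institutional research office.
students = pd.DataFrame({
    "did_research": [True, True, False, False, False, False],
    "prior_gpa":    [3.6,  3.1,  3.5,   3.0,   3.7,   2.5],
    "graduated":    [1,    1,    1,     0,     1,     0],
})

treated = students[students["did_research"]]
controls = students[~students["did_research"]]

# For each researcher, find the non-researcher with the closest prior GPA.
matched = pd.DataFrame(
    [controls.iloc[(controls["prior_gpa"] - gpa).abs().argmin()]
     for gpa in treated["prior_gpa"]]
)

print("Graduation rate (research):", treated["graduated"].mean())
print("Graduation rate (matched): ", matched["graduated"].mean())
```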
Managing Data and Institutional Review Board (IRB) Considerations
Educational assessment of undergraduate research often falls under the category of quality improvement activities and can be done within the limits of typical educational activities. This type of assessment may then be exempt from IRB requirements; check with your IRB about qualifying for exemption. Here are some tips that generally help:
  • If you don’t need personal identifiers, don’t collect them. If you are not going to match this survey, interview, or focus group to other sources of data in the future, do not collect student names or identifiers so there is no risk of identification (a de-identification sketch follows this list).
  • Utilize previously established educational practices, including undergraduate research or course-based research. If your intervention is something new that has never been tested before, you will need IRB oversight. If you are testing the implementation of something that has been previously tested, such as active learning, a hands-on project, or a project-based learning initiative, your research is more likely to be exempt.
  • Utilize data from traditional educational activities or student products (e.g., research papers or course exams).
  • If your research is a case study or is primarily to inform quality improvement activities for your program or university, it is more likely to be exempt. 
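Below is a minimal sketch of de-identifying a survey export before analysis, assuming a hypothetical CSV with name, email, and student_id columns. Dropping identifiers outright is safest; hashing with a secret salt (stored separately from the data) is a fallback for when you must link records across sources later.

```python
import hashlib
import pandas as pd

SALT = "replace-with-a-secret-salt"  # illustrative placeholder; store apart from the data

def pseudonymize(student_id) -> str:
    """Replace a student ID with a salted, truncated hash."""
    return hashlib.sha256((SALT + str(student_id)).encode()).hexdigest()[:12]

# Hypothetical file and column names; adapt to your actual survey export.
responses = pd.read_csv("survey_export.csv")
responses = responses.drop(columns=["name", "email"])  # remove direct identifiers
responses["student_id"] = responses["student_id"].map(pseudonymize)
responses.to_csv("survey_deidentified.csv", index=False)
```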
 
Even if you believe your research will be exempt, check to see if you need IRB approval before you conduct your assessment. Consult with your IRB to determine if your assessment falls under Human Subject Protection or if it is considered a “quality improvement activity.” Additional information can also be found here: https://www.hhs.gov/ohrp/regulations-and-policy/guidance/faq/index.html
 
Even if you do not need IRB approval, follow ethical research practices and remember that any data that contain information about students are also protected by FERPA.