Undergraduate Research in Humanities, Arts, and Social Sciences: Helping Students Navigate Uncertainty and Build Community through a Structured Cohort-Based Program

Toven-Lindsey, Brit, Erin M. Sparck, Kelly Kistner, Jacquelyn Ardam, and Whitney Arnold. 2023. “Undergraduate Research in Humanities, Arts, and Social Sciences: Helping Students Navigate Uncertainty and Build Community through a Structured Cohort-Based Program.” Scholarship and Practice of Undergraduate Research 7(2): 15–24. https://doi.org/10.18833/spur/7/2/2


Undergraduate research experiences are an important component of an undergraduate education for many students and can have wide-ranging benefits, including new knowledge about the scientific method and research procedures (Craney et al. 2011; Kistner et al. 2021); critical thinking skills (Brownell et al. 2015); persistence in the undergraduate major (Craney et al. 2011); and entry into graduate school (Wilson et al. 2018). These benefits are particularly important for students from minority groups, who have often been excluded from such high-impact, immersive learning opportunities (Estrada, Hernandez, and Schultz 2018; Hernandez et al. 2018). Additionally, undergraduate research participation has been shown to support students’ identity development (Palmer et al. 2015; Robnett, Chemers, and Zurbriggen 2015) and sense of belonging on campus (Miller, Williams, and Silberstein 2019).

Yet the literature on benefits of undergraduate research has been primarily focused on students in science, technology, engineering, and math (STEM) fields, leaving a gap in knowledge about research programs focused specifically on the needs of students in humanities, arts, and social sciences (HASS; Craney et al. 2011; Haeger et al. 2020). Whereas undergraduate research in STEM fields is often highly structured and guided by faculty and graduate mentors, students’ independent research experiences in HASS fields can be more variable, with differing amounts of oversight, structure, and training (Craney et al. 2011). Further, there may be limited incentives for faculty across disciplines to dedicate time and energy to mentoring undergraduate researchers (Becker 2020; Webber, Nelson Laird, and BrckaLorenz 2013), indicating a need for institutional resources and support.

In this study the authors examine student learning experiences in the Undergraduate Research Fellows and Scholars Programs (URFP and URSP, hereafter referred to as URP) at the University of California Los Angeles (UCLA), which supports students performing a multiquarter research or creative project each year under the mentorship of a faculty member on campus. Previous research on URP highlights numerous learning and career outcomes. Students report that these programs advance their critical thinking and problem-solving skills, professionalism, and written and oral communication skills (Kistner et al. 2021), and that they help students develop faculty mentorship and peer networks (Arnold et al. forthcoming).

Structured programs like URP create more standardized pathways to research and in turn increase access to research opportunities for diverse groups of students, but further study is needed to examine how students conceptualize their learning and advancement as researchers during their undergraduate years. To build upon previous findings and contribute to the literature on programs for students conducting research in HASS fields, this study aimed to better understand student experiences in URP and how structured research programs help students build confidence in their research skills and connections to the community of scholars in their field.

Literature

Participation in undergraduate research provides students with opportunities to learn about the disciplinary norms and the ways of thinking and practices in a particular field (Barker 2009; Hall et al. 2021; Hunter, Laursen, and Seymour 2007). Students participating in guided research gain confidence, critical thinking and technical skills, and clarification of their future career aspirations (Hunter et al. 2007; Thiry, Laursen, and Hunter 2011); have the chance to see faculty model a particular version of learning and inquiry (Palmer et al. 2015; Palmer et al. 2018); and gain methodological and technical proficiency in their field (Feldman, Divoll, and Rogan-Klyve 2013). Yet, although many outcomes of undergraduate research benefit students across disciplines, there are differences in student goals and needs.

In their study of cross-disciplinary perspectives on undergraduate research, Craney and colleagues (2011) found that students in the social sciences and humanities highly valued producing papers and publications as a primary outcome of their research experience, whereas students in STEM fields were more likely to value gains in specific technical skills. Further, students in HASS fields were more likely than their STEM peers to pursue a research project that interested them, and they were also more likely to conduct research on their own rather than alongside peers (Craney et al. 2011).

These findings point to the unique needs and interests of students conducting research in HASS fields, and the importance of considering the role of academic discipline in designing undergraduate research programs. Further, students can often feel disconnected from the research activities of their university (Palmer et al. 2015) and face barriers to accessing hands-on learning experiences. As a large, public research-intensive institution with a diverse student population, UCLA provides opportunities for many students to engage in cutting-edge research. Yet, accessing undergraduate research experiences can be challenging when one considers the wide range of academic programs and complex network of research activities. Students may lack the research capital (defined as “the economic, social, and cultural capital that influence students’ paths to engaging in undergraduate research”; Cooper, Cala, and Brownell 2021, 4) to navigate these unfamiliar pathways at undergraduate institutions (Ovink and Veazey 2011).

To broaden access and help students navigate the research process, many campuses have developed undergraduate research centers and programs, with the majority focused primarily on (and established to support) STEM students. Well-structured research programs have been shown to support retention of minority students in STEM, in particular, and to help increase a sense of belonging through components such as faculty mentoring and early exposure to research (Carter, Mandell, and Maton 2009; Chang et al. 2014; Sellami et al. 2021). In a study of the McNair Scholars Program focused on the experiences of 13 Black students, Clayton, Breeden, and Davis (2023) found that participation in undergraduate research helped students, most of whom were in HASS majors, build confidence capital through hands-on research and exploring graduate school opportunities, sharing their research at conferences and meetings, and building a network of mentors and peers. Further, faculty mentorship was an important component of students’ growth as scholars and of building confidence to navigate academic spaces and graduate school applications (Clayton et al. 2023). Many of these programs are highly structured and guide small cohorts of students through the process of accessing and participating in undergraduate research. Undergraduate research centers and programs like URP can help broaden participation and combat perceptions about the exclusivity of research by offering more direct and accessible pathways into a wider range of undergraduate research experiences (Haeger et al. 2021).

Methods

Research Setting

UCLA is a highly selective research-intensive university in southern California that enrolls more than 6,000 first-year students and 3,500 transfer students each year. Approximately 30 percent of incoming first-year students and 34 percent of incoming transfer students in fall 2022 were students from minority racial and ethnic groups, and 27 percent and 43 percent of these groups, respectively, were first-generation undergraduate students. Many undergraduates participate in guided research experiences. According to data from the UCLA College Senior Survey (UCLA 2023) collected from 2014 to 2022 (N = 50,829), around 30 percent of respondents, regardless of major, assisted faculty with research on a voluntary basis (these experiences may offer students course credit), and 47 percent collaborated with peers on a course-based research project. Although these numbers do not capture the full range of research engagement, they do indicate that research is an important component of UCLA students’ undergraduate experience.

Undergraduate Research Fellows and Scholars Program

The UCLA Undergraduate Research Center for Humanities, Arts, and Social Sciences (URC-HASS) was established in 1998 with a primary mission of promoting, developing, and celebrating undergraduate student research and creative inquiry, while also enhancing undergraduate education and preparing students for careers in all areas. URC-HASS supports students with a wide range of programs and services, including research courses and resources, campus-wide events and programming, and scholarships. One set of programs, the Undergraduate Research Fellows and Scholars Programs, provides financial support for students to do multiquarter research or creative projects under the mentorship of a UCLA faculty member.

Students in both programs enroll in a research contract course for credit with their faculty mentor, receive a scholarship, and present their work during the annual Undergraduate Research Week. Students in the Undergraduate Research Fellows Program additionally participate in a research and professional development course taught by faculty in URC-HASS. Through these and other activities, students receive guidance from URC-HASS graduate research mentors and build community with their peers. The Undergraduate Research Scholars Program is reserved for more advanced students (third- and fourth-year students) who are completing a comprehensive independent research project, honors thesis, or capstone. Although these students do not enroll in the research and professional development course, they can still meet with graduate mentors, participate in workshops, and present during the campus research week. Hereafter, data from these programs will be combined, and they will be referred to as the Undergraduate Research Programs (URP).

Data Collection and Analysis

This study draws on data from numerous sources to better understand the experiences of students who participated in URP from 2015 to 2022 (N = 483). To better understand the landscape of undergraduate experiences at UCLA, this study includes participating students’ responses to the Senior Survey (N = 208). Specifically, coded data from open-ended responses to the question: “What was your most meaningful learning experience at UCLA?” were analyzed. Additionally, all students participating in URP were invited to complete a survey at the beginning and end of the program (N = 431 matched responses). Table 1 provides an overview of select student background characteristics for all URP students and those who completed the pre- and post-surveys, including gender, transfer student status, and identification as a member of a minority racial/ethnic group.

Although the pre- and post-program survey is designed to capture shifts in student perceptions and attitudes about research during the program, students enter these programs with varying levels of prior research experience. Select survey questions were included for analysis, including those focused on students’ individual research projects, the research community at UCLA, and specific components of URP, as well as open-ended responses to the question: “What has been most helpful about [URP]?” (N = 403; see Table 2). Finally, students in the 2021–2022 cohorts of URP submitted reflection memos (N = 54) focused on their experiences in the program and conducting their research projects.

Program surveys were analyzed using descriptive statistics and paired samples t tests of significance in SPSS. Students’ open-ended responses and reflection memos were analyzed using Dedoose qualitative coding software. Thematic analysis of reflection memos and open-ended questions was guided by existing literature on outcomes of undergraduate research, including skill development, faculty mentorship, barriers to engagement, and identity development (e.g., Craney et al. 2011; Kistner et al. 2021; Palmer et al. 2015). Researchers began coding with these a priori themes, and student comments also produced emergent themes, such as a sense of ownership and agency related to their research projects (Maxwell 2013; Saldaña 2013). Codes for the open-ended responses were developed and tested by multiple researchers in an iterative process across multiple terms to establish a final codebook.
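The paired samples t tests were run in SPSS; for readers wishing to replicate the approach with other tools, a minimal sketch in Python is shown below. The pre/post Likert ratings here are invented for illustration only (the study’s survey data are not publicly available).

```python
# Illustrative sketch of a paired samples t test on matched pre/post
# survey responses. Ratings below are hypothetical 1-5 Likert values,
# not data from the study.
from scipy import stats

pre = [3, 4, 2, 3, 3, 4, 2, 3, 4, 3]   # hypothetical pre-program ratings
post = [4, 5, 3, 3, 4, 5, 2, 4, 5, 4]  # same students, post-program

# ttest_rel computes the t statistic on the paired differences (pre - post)
t_stat, p_value = stats.ttest_rel(pre, post)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```

A negative t statistic here indicates higher post-program means, mirroring the direction of the gains reported in Table 3.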

Findings

Structure, Peer Networks, and Support for Research

Paired samples t tests for pre- and post-survey responses indicated that students received needed support and resources in URP to successfully conduct their research projects (see Table 3). Specifically, the majority of students reported that the resources and tools provided by URP helped them with their research project (mean = 4.28, SD = 0.78 at post; p < .001) and that they learned new research skills as part of the program (mean = 4.34, SD = 0.81 at post; p < .001). Students also reported that participating in URP contributed to their interest in research (mean = 4.25, SD = 0.92 at post; p < .001). Additional mean score comparisons by gender, race/ethnicity, and transfer status did not result in significant differences between groups.

Students’ open-ended responses about those features of URP that were most helpful further illuminated these findings (N = 403; see Table 4). The most prominent themes about helpful aspects of the program included: (1) the structure, guidance, and accountability of URP (173 coded excerpts); (2) the flexibility to focus on their research project afforded by program funding and course credit (109 coded excerpts); (3) support and community from peers (93 coded excerpts); (4) mentorship and guidance from graduate mentors and program staff (75 coded excerpts); and (5) the positive environment and supportive culture of the program in general (69 coded excerpts).

Regarding the overall structure and accountability that the program offered, one student commented, “[URP] has given me a clear direction as to how I can initiate and execute a research project.” Another said, “The most helpful aspect has been the structure and encouragement—I don’t think I would have initiated this student project without the program’s existence.” URP offered students different options for formal programming and structure to meet their needs, from weekly class meetings that introduced students to best practices and campus resources for research to drop-in workshops and mentor check-ins for more advanced students, and these findings indicated that all students benefited from the ongoing support and accountability built into these programs.

Program funding was also crucial for many students. As one student put it, “As a first-generation low-income student I have always had to work … [URP] gave me the opportunity to focus on research and decrease[d] financial stressors.” The structure and stipend of URP helped students gain new skills, stay on track, and focus their energy in ways that pushed their research projects forward.

Scholarly Community

Building connections and a sense of belonging in the research community is an important outcome of undergraduate research experiences (e.g., Museus, Yi, and Saelua 2017; Palmer et al. 2015). Analysis of responses to program surveys at the beginning and end of URP indicated that students made significant gains in their identity as members of the research community at UCLA during the program. As Table 5 indicates, students reported significant gains in (1) their feelings about being a valued member of the UCLA research community; (2) developing a sense of community with faculty and peers; and (3) presenting at academic conferences and/or publishing in an academic journal, all as a result of their participation in URP. Additionally, students indicated that one of the reasons they chose to conduct research in URP was because they saw their peers engaged in research (pre-survey mean = 2.994, post-survey mean = 3.186; p < .001).

Two of the major themes that emerged from the analysis of reflection memos (N = 54) were students’ (1) sense of agency and ownership of their research projects (N = 33 participants); and (2) sense of belonging in the community of scholars at UCLA (N = 29 participants; see Table 6). More than 20 percent of students (N = 12) addressed both of these themes in their reflections, indicating that a strong sense of ownership of their research project also may have contributed to their sense of belonging in the research community. As one transfer student majoring in art said, “What feels most rewarding is having the support and encouragement to ask questions that deeply matter to me . . . the kind of total autonomy to choose a topic and my level of personal investment is creating a different kind of experience.” Another student, majoring in psychology, stated, “[URP] has also allowed me to gain a better understanding of my place at UCLA as a researcher … Being a transfer student, I started to have imposter syndrome and was worried about how I would fit in at UCLA … Being a part of [URP] showed me that I belong here and faculty members see my worth.” As these comments illustrate, having the opportunity to design and conduct their own projects helped students develop not only technical skills but also confidence in their abilities and identities as researchers.

Academic Journey

Nearly all students who participated in URP completed the UCLA College Senior Survey, and 208 commented on their most meaningful learning experiences. Nearly 40 percent of students talked about undergraduate research as one of these important learning experiences (N = 81). About their experience in URP, one student commented, “I am able to learn research skills and gain opportunities to get in touch with the research community, which motivates me to continue doing research in the future and to pursue a higher degree in psychology.” Another said, “My most meaningful experience was meeting my current graduate mentor and having the opportunity to participate in research programs to present my research at various conferences. It was something I never imagined I would do.” As these students indicated, being involved in research as an undergraduate offered students a chance to gain new skills and knowledge, and to build confidence in their abilities.

Responses to the URP program survey indicated that students felt a sense of ownership and connection to their research projects. In the post-survey, 95.6 percent of students agreed or strongly agreed that their research project built on their academic interests (N = 424); 91.2 percent indicated that they had initiated and designed their project (N = 416); and 93.7 percent said that their project exposed them to new areas of intellectual curiosity (N = 420).

Analysis of student reflection memos indicated that participating in URP helped them build skills that were relevant not only to their research projects but to their academic journey more broadly. Specifically, two major themes related to students’ skill development included: (1) gains in specific research skills such as data analysis and reviewing literature and sources (N = 54 coded excerpts); and (2) time management (N = 43 coded excerpts). With regard to gaining new research skills, one student who was majoring in human biology and society commented, “I had a very cookie-cutter understanding of research as this structured way of knowing . . . I now understand that conducting my own research is a two-way street that allows me to reverse and revisit my thoughts to strengthen my work.” A transfer student majoring in psychology reported, “Thanks to this class I was able to make informed adjustments to my research and improve upon the method.” Students also talked about the ways that being part of URP helped them improve their time management skills. Developing these skills helped them successfully complete their research projects, and students also described their relevance for other courses and graduate school and career planning.

Discussion

Our findings indicate that, by offering a wide range of programs and services that aim to meet students’ needs at different stages in their academic journey, URC-HASS is helping students build confidence in their abilities as researchers and gain a greater sense of connection to the community of scholars at UCLA. Feeling connected to their academic discipline and institution has been shown to support student success and engagement (Museus et al. 2017). Further, URP has been shown to help students develop confidence in their skills as researchers, including critical thinking and problem solving, professionalism, and communication (Kistner et al. 2021).

Both survey and reflection data indicate that having a sense of ownership and agency related to their research project was an important component of the URP experience for many students, which aligns with the typical structure of undergraduate research in HASS fields, where students initiate independent projects and seek out a faculty mentor. These findings are similar to those of previous research on supporting student motivation and engagement in academic tasks associated with mastery goals, individual choice, and connecting to students’ interests (e.g., Cavagnetto et al. 2020; Crowe and Boe 2019; Trevino and DeFreitas 2014). They also point to differences between undergraduate research experiences in HASS and STEM fields that warrant further exploration.

Reflection data also indicate that students’ sense of ownership and their ability to complete their own research study influenced their shifting attitudes about membership in the research community at UCLA. Even though students faced challenges with data collection and analysis and the solitary nature of conducting an independent research project, URP facilitated consistent mentorship from staff in URC-HASS and encouraged students to communicate more with their faculty advisers. Building relationships with faculty mentors has been shown to positively influence students’ feelings of acceptance and belonging on campus (Miller et al. 2019). These mentor relationships help students gain knowledge about research procedures and norms in their field, as well as build confidence in their research skills and competencies (Davis and Jones 2020; Hunter et al. 2007).

Students conducting research in HASS fields benefited from the structure, accountability, mentorship, and community offered by URP. For students who are often engaged in more independent research projects of their own design, URP offers needed training in developing skills and strategies for time management and project planning, communicating with mentors, and finding campus resources, all of which help students successfully complete their projects. Further research is needed to better understand the unique experiences of students from different racial groups, transfer students, first-generation undergraduate students, and students across disciplinary groups.

Students also built strong relationships with mentors and peers through regular interactions and meetings, and by sharing their projects during Undergraduate Research Week. These findings are in line with previous research indicating that cohort-based research programs support students in connecting with faculty and peers and build confidence for pursuing research and graduate school (Clayton et al. 2023; Eagan et al. 2013). Nevertheless, more research is needed to understand the relationship between students and their faculty mentors, and how the structure and training offered by URP helps support faculty mentors in HASS fields. With limited incentives and support for faculty mentors and greater variability in project methods and students’ preparation to conduct research in these fields, research programs like URP serve an important role in guiding students through the research process and helping them cultivate strong working relationships with their faculty mentors (Davis et al. 2020).

Finally, the findings indicated that by offering outreach, training, and funding for research, URC-HASS is broadening participation in undergraduate research. Comments from transfer students, in particular, who are less likely to participate in undergraduate research (Chamely-Wiik et al. 2021), about their experience getting connected to URP and conducting an independent study highlight the benefits of structured research programs and funding that can be accessed by all students.

Implications

Through ongoing assessment efforts, leaders in URC-HASS have gained insights about student experiences with the wide range of research programs offered by the center and have worked to continually improve URP for all students. Best practices for designing undergraduate research programs that provide structure beyond the faculty-student mentorship dyad might include: (1) offering skills-based workshops on topics such as communicating with a mentor, collecting sources, time management, giving oral presentations, and applying to graduate school; (2) assigning research-related work, such as annotated bibliographies and abstracts, with regular due dates; and (3) creating opportunities for students to present work in progress (e.g., colloquiums, workshops) as well as their final research projects (e.g., Undergraduate Research Week). Importantly, using a cohort-based model, as URP does, helps to encourage peer support and networking opportunities, as do mentorship and individualized support from the staff and graduate students leading the program. Finally, offering funding for research programs in which students will be doing original research helps to broaden access, particularly for lower-income and first-generation undergraduate students, and offers students greater flexibility and time to focus on their projects.

Conclusion

Investing resources in programs such as those offered by URC-HASS is a worthwhile pursuit that can help create consistency in mentorship, preparation, and accountability for students and streamline the training of research skills that support faculty mentors. Tailoring these efforts to the needs of HASS students helps them build confidence, a sense of ownership, and a connection to the research community, creating a richer undergraduate experience.

Institutional Review Board

All research protocols involving human subjects were reviewed and approved by the Institutional Review Board at the University of California Los Angeles (#18-001292).

Conflict of Interest

The authors declare that they have no conflicts of interest.

Data Availability

The data underlying this study are not publicly available due to the procedures for data collection approved in the UCLA IRB protocol for this study.

References

Arnold, Whitney, Kelly Kistner, Erin M. Sparck, and Marc Levis-Fitzgerald. Forthcoming. “Academic Growth and Professional Development through Undergraduate Humanities Research.” Profession.

Barker, Lecia. 2009. “Student and Faculty Perceptions of Undergraduate Research Experiences in Computing.” ACM Transactions on Computing Education 9(1): 1–28.

Becker, Megan. 2020. “Importing the Laboratory Model to the Social Sciences: Prospects for Improving Mentoring of Undergraduate Researchers.” Journal of Political Science Education 16: 212–224.

Brownell, Sara E., Daria S. Hekmat-Scafe, Veena Singla, Patricia Chandler Seawell, Jamie F. Conklin Imam, Sarah L. Eddy, Tim Stearns, and Martha S. Cyert. 2015. “A High-Enrollment Course-Based Undergraduate Research Experience Improves Student Conceptions of Scientific Thinking and Ability to Interpret Data.” CBE–Life Sciences Education 14(2): ar21.

Carter, Frances D., Marvin Mandell, and Kenneth I. Maton. 2009. “The Influence of On-Campus, Academic Year Undergraduate Research on STEM PhD Outcomes: Evidence from the Meyerhoff Scholarship Program.” Educational Evaluation and Policy Analysis 31: 441–462.

Chamely-Wiik, Donna, Evelyn Frazier, Daniel Meeroff, Jordan Merritt, Jodiene Johnson, William R. Kwochka, Alison I. Morrison-Shetlar, Michael Aldarondo-Jeffries, and Kimberly R. Schneider. 2021. “Undergraduate Research Communities for Transfer Students: A Retention Model Based on Factors That Most Influence Student Success.” Journal of the Scholarship of Teaching and Learning 21(1).

Chang, Mitchell J., Jessica Sharkness, Sylvia Hurtado, and Christopher B. Newman. 2014. “What Matters in College for Retaining Aspiring Scientists and Engineers from Underrepresented Racial Groups.” Journal of Research in Science Teaching 51: 555–580.

Clayton, Ashley B., Roshaunda L. Breeden, and Tiffany J. Davis. 2023. “‘My Entire Support System for Graduate School’: Black Students’ Experiences in a McNair Scholars Program.” Journal of College Student Retention: Research, Theory & Practice. doi: 10.1177/15210251231182683

Cooper, Katelyn M., Jacqueline M. Cala, and Sara E. Brownell. 2021. “Cultural Capital in Undergraduate Research: An Exploration of How Biology Students Operationalize Knowledge to Access Research Experiences at a Large, Public Research-Intensive Institution.” International Journal of STEM Education 8: 1–17.

Craney, Chris, Tara McKay, April Mazzeo, Janet Morris, Cheryl Prigodich, and Robert De Groot. 2011. “Cross-Discipline Perceptions of the Undergraduate Research Experience.” Journal of Higher Education 82: 92–113.

Crowe, Jessica, and Austin Boe. 2019. “Integrating Undergraduate Research into Social Science Curriculum: Benefits and Challenges of Two Models.” Education Sciences 9: 296.

Davis, Shannon N., Pamela W. Garner, Rebecca M. Jones, and Duhita Mahatmya. 2020. “The Role of Perceived Support and Local Culture in Undergraduate Research Mentoring by Underrepresented Minority Faculty Members: Findings from a Multi-Institutional Research Collaboration.” Mentoring & Tutoring: Partnership in Learning 28: 176–188.

Davis, Shannon N., and Rebecca M. Jones. 2020. “The Genesis, Evolution, and Influence of Undergraduate Research Mentoring Relationships.” International Journal for the Scholarship of Teaching and Learning 14(1): ar6.

Eagan, M. Kevin Jr., Sylvia Hurtado, Mitchell J. Chang, Gina A. Garcia, Felisha A. Herrera, and Juan C. Garibay. 2013. “Making a Difference in Science Education: The Impact of Undergraduate Research Programs.” American Educational Research Journal 50: 683–713.

Estrada, Mica, Paul R. Hernandez, and P. Wesley Schultz. 2018. “A Longitudinal Study of How Quality Mentorship and Research Experience Integrate Underrepresented Minorities into STEM Careers.” CBE–Life Sciences Education 17(1): ar9.

Feldman, Allan, Kent A. Divoll, and Allyson Rogan-Klyve. 2013. “Becoming Researchers: The Participation of Undergraduate and Graduate Students in Scientific Research Groups.” Science Education 97: 218–243.

Haeger, Heather, John E. Banks, Camille Smith, and Monique Armstrong-Land. 2020. “What We Know and What We Need to Know about Undergraduate Research.” Scholarship and Practice of Undergraduate Research 3(4): 62–69.

Haeger, Heather, Corin White, Shantel Martinez, and Selena Velasquez. 2021. “Creating More Inclusive Research Environments for Undergraduates.” Journal of the Scholarship of Teaching and Learning 21(1).

Hall, Eric, Elizabeth Bailey, Simon Higgins, Caroline Ketcham, Svetlana Nepocatych, and Matthew Wittstein. 2021. “Application of the Salient Practices Framework for Undergraduate Research Mentoring in Virtual Environments.” Journal of Microbiology and Biology Education 22(1): 22.1.92. doi: 10.1128/jmbe.v22i1.2287

Hernandez, Paul R., Anna Woodcock, Mica Estrada, and P. Wesley Schultz. 2018. “Undergraduate Research Experiences Broaden Diversity in the Scientific Workforce.” BioScience 68: 204–211.

Hunter, Anne-Barrie, Sandra L. Laursen, and Elaine Seymour. 2007. “Becoming a Scientist: The Role of Undergraduate Research in Students’ Cognitive, Personal, and Professional Development.” Science Education 91: 36–74.

Kistner, Kelly, Erin M. Sparck, Amy Liu, Hannah Whang Sayson, Marc Levis-Fitzgerald, and Whitney Arnold. 2021. “Academic and Professional Preparedness: Outcomes of Undergraduate Research in the Humanities, Arts, and Social Sciences.” Scholarship and Practice of Undergraduate Research 4(4): 3–9.

Kistner, Kelly, Erin Sparck, Brit Toven-Lindsey, Marc Levis-Fitzgerald, and Whitney Arnold. 2023. “Undergraduate Research in the Humanities, Arts, and Social Sciences: Alumni Personal and Professional Outcomes.” Poster presentation at 2023 AAC&U Conference on General Education, Pedagogy, and Assessment, New Orleans, LA.

Maxwell, Joseph A. 2013. Qualitative Research Design: An Interactive Approach. 3rd ed. Thousand Oaks, CA: Sage.

Miller, Angie L., Latosha M. Williams, and Samantha M. Silberstein. 2019. “Found My Place: The Importance of Faculty Relationships for Seniors’ Sense of Belonging.” Higher Education Research & Development 38: 594–608.

Museus, Samuel D., Varaxy Yi, and Natasha Saelua. 2017. “The Impact of Culturally Engaging Campus Environments on Sense of Belonging.” Review of Higher Education 40: 187–215.

Ovink, Sarah M., and Brian D. Veazey. 2011. “More Than ‘Getting Us Through’: A Case Study in Cultural Capital Enrichment of Underrepresented Minority Undergraduates.” Research in Higher Education 52: 370–394.

Palmer, Ruth J., Andrea N. Hunt, Michael Neal, and Brad Wuetherick. 2015. “Mentoring, Undergraduate Research, and Identity Development: A Conceptual Review and Research Agenda.” Mentoring & Tutoring: Partnership in Learning 23: 411–426.

Palmer, Ruth J., Andrea N. Hunt, Michael R. Neal, and Brad Wuetherick. 2018. “The Influence of Mentored Undergraduate Research on Students’ Identity Development.” Scholarship and Practice of Undergraduate Research 2(2): 4–14.

Robnett, Rachael D., Martin M. Chemers, and Eileen L. Zurbriggen. 2015. “Longitudinal Associations among Undergraduates’ Research Experience, Self-Efficacy, and Identity.” Journal of Research in Science Teaching 52: 847–867.

Saldaña, Johnny. 2013. The Coding Manual for Qualitative Researchers. 2nd ed. Thousand Oaks, CA: Sage.

Sellami, Nadia, Brit Toven-Lindsey, Marc Levis-Fitzgerald, Paul H. Barber, and Tama Hasson. 2021. “A Unique and Scalable Model for Increasing Research Engagement, STEM Persistence, and Entry into Doctoral Programs.” CBE–Life Sciences Education 20(1): ar 11.

Thiry, Heather, Sandra L. Laursen, and Anne-Barrie Hunter. 2011. “What Experiences Help Students Become Scientists? A Comparative Study of Research and Other Sources of Personal and Professional Gains for STEM Undergraduates.” Journal of Higher Education 82: 357–388.

Trevino, Naomi Noel, and Stacie Craft DeFreitas. 2014. “The Relationship between Intrinsic Motivation and Academic Achievement for First Generation Latino College Students.” Social Psychology of Education 17: 293–306.

University of California Los Angeles (UCLA). 2023. “The UCLA College of Letters and Science Reflecting on the UCLA Experience.” UCLA College Senior Survey. https://www.college.ucla.edu/seniorsurvey

Webber, Karen L., Thomas F. Nelson Laird, and Allison M. BrckaLorenz. 2013. “Student and Faculty Member Engagement in Undergraduate Research.” Research in Higher Education 54: 227–249.

Wilson, Alan E., Jenna L. Pollock, Ian Billick, Carmen Domingo, Edna G. Fernandez-Figueroa, Eric S. Nagy, Todd D. Steury, and Adam Summers. 2018. “Assessing Science Training Programs: Structured Undergraduate Research Programs Make a Difference.” BioScience 68: 529–534.

Brit Toven-Lindsey
University of California Los Angeles,
btovenlindsey@teaching.ucla.edu

Brit Toven-Lindsey is a postdoctoral scholar at the Center for Educational Assessment at UCLA. She earned her PhD in education from the UCLA School of Education and Information Studies. Toven-Lindsey’s research interests focus on giving voice to the diverse experiences of learners and educators, and the ways that more inclusive and equitable pedagogical approaches, campus policies, and learning environments can support student achievement, learning, and persistence.

Erin Sparck is a postdoctoral scholar at the Center for Educational Assessment at UCLA. Sparck received her PhD in psychology from UCLA. Her research interests focus on the application of cognitive psychology to educational practice, including how to improve learning through effective testing and how to improve learners’ metacognitive awareness of effective study.

Kelly Kistner is the assistant director of the Undergraduate Research Center for Humanities, Arts, and Social Sciences at UCLA. She earned her PhD in sociology from the University of Washington in Seattle. Her research and publications have centered on the history and sociology of knowledge production. Kistner oversees the Research Revealed Program and Aleph Undergraduate Research Journal. She also is involved in the center’s planning, outreach, and assessment activities.

Jacquelyn Ardam is the director of the Undergraduate Research Center for Humanities, Arts, and Social Sciences at UCLA. She holds a PhD in English from UCLA and is the author of Avidly Reads Poetry (NYU Press, 2022). Her writing on literature, art, pedagogy, and culture has been published in a number of academic and public venues. At UCLA, Ardam is particularly interested in increasing the accessibility of undergraduate research experiences and creating entry-level programs for new researchers across the humanities, arts, and social sciences.

Marc Levis-Fitzgerald heads the UCLA Center for Educational Assessment, staffed by researchers with backgrounds in education, sociology, psychology, and chemistry. He received a PhD in higher education at UCLA. Levis-Fitzgerald’s research interests include curriculum reform and evaluation, student and faculty development, and institutional transformation. He is particularly interested in documenting the experiences of undergraduate and graduate students and directs the development and implementation of the UCLA Senior Survey.

Whitney Arnold is an assistant professor of comparative literature and medicine at UCLA. She served as the director of the Undergraduate Research Center for Humanities, Arts, and Social Sciences at UCLA from 2013 to 2022. Arnold’s research and publications focus on self-narratives and autobiographical texts, narratives of health and illness, theories of self-representation, and literary history.

Quantitative Methods in the Assessment of Undergraduate Research, Scholarship, and Creative Inquiry

Lopatto, David. 2023. Quantitative Methods in the Assessment of Undergraduate Research, Scholarship, and Creative Inquiry. Scholarship and Practice of Undergraduate Research 7 (2): 5-12. https://doi.org/10.18833/spur/7/2/3


SPUR represents the philosophy of the Council on Undergraduate Research (CUR) to support and promote high-quality mentored undergraduate research, scholarship, and creative inquiry. Readers of SPUR approach the journal hoping to find model programs, good ideas, and the characteristics or causes of successful undergraduate research programs. The dual goals of SPUR, according to LaPlant (2017), are to “stimulate the rigorous assessment of undergraduate research initiatives and programs” and “that SPUR will encourage best practices and models of undergraduate research” (3). Consideration of these goals leads to two views of the use of quantitative methods. Whereas rigorous assessment evokes ideas concerning statistical comparison and control of confounding variables to clarify a theory of undergraduate research, scholarship, and creative inquiry (URSCI) and its effects on student behavior, finding model programs suggests that programs may be emulated across institutions by educators who are free to change support factors to facilitate the program’s success. In the first case the goal is generalizability; in the second the goal is transferability.

The focus of this commentary is on the use of quantitative research methods in the understanding and assessment of undergraduate research, scholarship, and creative inquiry. The Council on Undergraduate Research has a broad definition of undergraduate research (a mentored investigation or creative inquiry conducted by undergraduates that seeks to make a scholarly or artistic contribution to knowledge), meant to include all scholarly disciplines and interdisciplines. It is necessary to acknowledge that disciplinary pluralism implies epistemological pluralism. The assessment of some undergraduate research programs may rely on the qualitative methods suitable for the nature of the program. For example, Naepi and Airini (2019) described the Knowledge Makers program used to mentor Indigenous researchers and evaluated the impact of the program through e-portfolios and student reflections without the use of quantitative data. Zhen (2020) described a program for teaching chefs to be researchers and presented a summary of successful projects as well as a sample of visual evidence (a photograph) in support of the program’s effectiveness. The observations regarding quantitative methods that follow are not intended to privilege quantitative methods over other epistemologies.

Finding model (exemplary) programs suggests that the authors of many of this journal’s reports are advocating causes (the URSCI mission) as well as investigating causes. CUR’s mission attracts the interest of teacher-scholars whose strategic aim is not one of complete disinterest or impartiality. They want the promise of undergraduate research to succeed. SPUR reports are often composed by mentors, instructors, and program directors who have a stake in the success of their program and the broader mission of CUR. The challenge is to practice impartiality when analyzing undergraduate research programs, and it is in this endeavor that quantitative methods may help. Quantitative methodology comprises conventions for best practices that enhance credibility, such as rules applying to the size and scope of an adequate sample and decision rules for what constitutes “statistical significance.” The promise of quantitative methods is that they permit tactics for evaluation that are objective, employed by teacher-scholars who have objectives.

Although textbooks about quantitative methods suggest that a research plan should precede the selection of appropriate measures, educational researchers often rely on available measures of program effectiveness such as student grade point average (GPA), graduation rates, or completion rates. The advantages and disadvantages of these measures are familiar. Institutional measures such as the GPA are routinely collected, and archives are readily available. Although most researchers recognize that grade point averages comprise a heterogeneous mix of course selection, degree of difficulty, and other confounds, the archive continues to be employed in assessment and evaluation (Brown et al. 2020; Nicols-Grinenko et al. 2017; Sell, Naginey, and Stanton 2018). In this selection of a measure, familiarity breeds contentment. Reliance on one imperfect measure is a risky method; however, there is a remedy. The methodology of multioperationalism (Cook and Campbell 1979; Webb et al. 1981) suggests that multiple measures may align to support the argument for the benefits of an URSCI program. Therefore, analyzing the GPA alongside another measure such as student survey responses may strengthen the argument for program effectiveness.
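The multioperational idea can be sketched in a few lines. This is a minimal illustration with invented numbers, not data from any study: two measures of a hypothetical cohort (GPA and a survey rating) are reported side by side, and their agreement is checked with a hand-computed Pearson correlation.

```python
from math import sqrt
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y)

# Hypothetical program data (invented for illustration): participants'
# GPAs and their 1-5 survey ratings of perceived research-skill gains.
gpa    = [3.4, 3.7, 3.1, 3.9, 3.5, 3.8, 3.2, 3.6]
survey = [4.0, 4.5, 3.5, 5.0, 4.0, 4.5, 3.0, 4.0]

# Multioperationalism: report each measure, then check that they align.
print(f"mean GPA:    {mean(gpa):.2f}")
print(f"mean rating: {mean(survey):.2f}")
print(f"correlation: {pearson(gpa, survey):.2f}")
```

When the two imperfect measures point in the same direction, the argument for program effectiveness is stronger than either measure makes alone.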

Research Questions

The dual targets of CUR’s mission suggest dual research goals. Consistent with the rigorous assessment of undergraduate research initiatives, Haeger et al. (2020) suggest that the key research question for the study of URSCI is to explicate the causal relationship between URSCI and the various outcomes that have been attributed to the experience (e.g., Lopatto 2004). Haeger et al. observe that quantifying the effects of URSCI has been a challenge, writing that “the majority of research measuring the impact of undergraduate research relies on indirect measures or correlations between outcomes and participation” (67). SPUR also promotes the sharing of models of undergraduate research, inviting the transfer of a model program to new settings even though the underlying causal model is not known. Causal models and model programs are not the same and may afford different sorts of quantitative analysis.

Simplicity

When deploying quantitative methods for purposes of describing or evaluating URSCI, it is tempting to bring to bear the full persuasive impact of sophisticated methods used to uncover latent variables, account for multiple factors and their interactions, and seek an elusive generalizability of the pedagogy’s effects. There is value in choosing a more modest approach. Experienced researchers caution us that “less is more.” Cohen (1990), writing about psychology, advocated simplicity in research designs, citing the problems that accompany complexity, including poor statistical power (the probability of finding an effect if it exists) and the increase in misleading conclusions of statistical significance as the number of tests increases. Kass et al. (2016) included “keep it simple” in their advice regarding effective statistical practice in computational biology. They wrote, “the principle of parsimony can be a trusted guide: start with simple approaches and only add complexity as needed, and then only add as little as seems essential” (4). Abelson (1995) wrote, “Data analysis should not be pointlessly formal. It should make an interesting claim . . . and do so by intelligent interpretation of appropriate evidence from empirical measurements or observations.” In support of interesting claims, authors often use familiar quantitative methods even if the disciplinary focus of the undergraduate research program is complex. For example, the SPUR issue for summer 2019 highlighted programs that featured undergraduate research experiences using big data (large databases and data visualization). None of the featured programs employed big data techniques to evaluate the program’s outcomes. Some reports favored descriptive statistics (Killion, Page, and Yu 2019; Lukes et al. 2019). Others favored descriptions of the program’s development or evolution without quantitative evaluation (Nelson, Yusef, and Cooper 2019).
It may be that even as URSCI programs grow to embrace contemporary topics such as machine learning, digital humanities, and artificial intelligence, the quantitative methods by which the programs are assessed remain relatively simple. Simple analyses include the t test, which was intended for samples smaller than 30, for comparing a treatment group to a comparison group; a paired version serves pretest-posttest comparisons. Some reports (e.g., McLaughlin, Patel, and Slee 2020) employ nonparametric statistics that do not demand normally distributed data. Faced with samples of fewer than 30, some reports acknowledge the difficulty of inferential comparisons and report only descriptive statistics (Dillon 2020; Spronken-Smith et al. 2018). For these small groups visual representation of data is helpful. If a cohort of students engaging in a program is very small, then it may be important to note the reasons for any student who fails to benefit or who drops out of the program. These reasons may be exogenous to the program, such as illness or family crises, and so may not influence the argument for the program’s effectiveness.
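As a sketch of how simple such an analysis can stay, the treatment-versus-comparison case can be run with a hand-rolled Welch’s t statistic. All numbers are invented for illustration; degrees of freedom and p-values are omitted, and a statistics package would supply them in practice.

```python
from math import sqrt
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances assumed)."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

# Hypothetical post-program skill ratings for a small research cohort
# versus a comparison group (n < 30 on both sides, where the t test
# is most at home).
cohort     = [4.2, 4.5, 3.9, 4.8, 4.1, 4.6, 4.3]
comparison = [3.6, 3.9, 3.4, 4.0, 3.7, 3.5, 3.8]

print(f"t = {welch_t(cohort, comparison):.2f}")
```

For a pretest-posttest design, the same logic applies to a paired t statistic computed on the per-student difference scores.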

Description

Quantitative data are the most common form of reporting the results of programs described in SPUR and other journals; however, the reliance on data does not compel inferential testing or model building. Instead, numerical data can be used as “mere description” (Gerring 2012), providing a more precise account of outcomes than qualitative summaries. If a study reports that student program participants average grades of 3.7, most readers know implicitly that the common GPA scale ranges from 0 to 4, and that 3.7 is a successful grade. Deming (1953) distinguished between surveys that were enumerative (asking how much) and surveys that were analytic (asking why). Enumerative surveys may be adequate for evaluating the effectiveness of a program by reporting graduation rates or attrition rates, vouching for the success of the program but falling short of identifying the specific cause of the success (Cartwright 2007). In some studies mere description is adequate for illustrating an effect. For example, Grindle et al. (2021) used descriptive counts and percentages to illustrate the results of a study of passive research involvement.
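An enumerative description in Deming’s sense can be produced with nothing more than counts and percentages. The cohort size and outcome categories below are invented for illustration.

```python
from collections import Counter

# Hypothetical outcomes for a 25-student cohort (categories invented):
# enumerative description -- "how much" -- with no inferential test.
outcomes = ["graduated"] * 21 + ["still enrolled"] * 3 + ["withdrew"] * 1

counts = Counter(outcomes)
n = len(outcomes)
for outcome, k in counts.most_common():
    print(f"{outcome:>14}: {k:2d} ({100 * k / n:.0f}%)")
```

Such a table vouches for the program’s success without claiming to explain its cause.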

Cohen (1990) noted that simplicity in describing data suggests the use of graphs and diagrams that may aid in presenting a program outcome. Figures can represent descriptive data efficiently, a common practice in this journal (Barney 2017; Brooks et al. 2019; Garrett et al. 2021; Gold, Atkins, and McNeal 2021; Kuan and Sedlacek 2022; Szecsi et al. 2019). Tufte (1983) outlined the characteristics of graphical excellence, including graphs that serve a clear descriptive purpose, encourage the viewer to think about substance, and invite comparisons between different pieces of data.

Validity

Readers expect the assessment of an URSCI program to be valid. Many adjectives can be placed before validity, and three types inevitably occur. The first is the validity of the instruments employed to measure outcomes. The second has to do with the internal logic of the program and how that program produces results (internal validity). Finally, the question of the generalizability or transferability of the program arises (external validity).

We expect the creators of instruments to present some evidence of the instrument’s validity by showing that the instrument is in agreement with other methods used to measure the same outcome (Campbell and Fiske 1959). Once the instrument’s trustworthiness is established, the use of the instrument by subsequent researchers often relies on the reputation of the instrument’s original validation. There is often not enough data or time to revisit techniques for validation of the instrument in every study. This trust in the instrument is normal; work proceeds slowly if the research instrument has to be revalidated each time. The concern arises when the new users of the instrument invoke a “mutatis mutandis” approach, that is, making necessary changes in the original instrument so that it fits the new project without affecting the main constructs measured by the instrument. The presumption is that the original instrument is robust, preserving its validity despite alterations. A perusal of the reports published in SPUR suggests that authors often use research instruments created by other researchers. Examples include the SURE survey (Survey of Undergraduate Research Experiences; Lopatto 2004); URSSA (Undergraduate Research Student Self-Assessment; Hunter et al. 2009); and the OSCAR Student Survey (Foster and Usher 2018). Items from these established surveys are occasionally revised to suit the context. Are there credible procedures for changing an instrument while claiming that it retains its essential meaning? The credibility of the instrument can be supported by response process validity, which involves the review of the survey items by subject matter experts, and cognitive interviewing of potential respondents to determine if respondents understood the intended meaning of the survey items. These procedures may or may not lend themselves to quantitative analysis, but they improve the validity of the modified instrument.

The effectiveness of the program, called internal validity, is “the degree to which an experiment is methodologically sound and confound-free” (Goodwin and Goodwin 2017, 148). The validity question reduces to the confidence we have that the URSCI program causes the changes in the students’ behaviors. Traditionally, the gold standard for causal assertions is the true experiment, or randomized controlled trial. Randomized controlled trials are rare in studies of undergraduate research and creativity. They rely on the researcher’s control of participant assignment to treatment and comparison groups as the basis for making a causal assertion that the program caused changes in the participants’ behavior. In the absence of randomized controlled trial design features, researchers use a variety of tactics to support a causal assertion. Some involve the creation of a nonequivalent comparison group that serves as a proxy for a genuine control group. Nicols-Grinenko et al. (2017) utilized their institution’s undergraduate population as a comparison group for students who participated in undergraduate research. After describing an initiative to build a culture of undergraduate research at their institution, they tracked undergraduate research participants and compared the participants’ graduation rates and grade point averages to those of all undergraduate contemporaries. They found higher graduation rates and grade point averages for undergraduate research participants compared to the general student population. Several researchers use a pretest as the comparison group for posttest data. Beer et al. (2019) used both between-groups and pretest-posttest data to argue for the effectiveness of a peer research consultant program. The results showed increments in desirable skills from pretest to posttest based on t tests. Ashcroft et al.
(2021) employed pre- and post-ratings of gains in the understanding of research and related items and found several significant Wilcoxon test results in the favorable direction. Tian et al. (2022) reported on the success of inquiry-based learning in China. They found significant gains on self-report items from the SURE survey (Lopatto 2004), although the choice of inferential test was unclear. Several of these reports chose to analyze items on a survey separately, leading to the concern that piecemeal testing may result in false positives (Type I errors).
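The piecemeal-testing concern can be illustrated with a Bonferroni correction, the simplest guard against inflated Type I error when survey items are tested separately. The p-values below are invented for illustration.

```python
# Hypothetical per-item p-values from testing each survey item
# separately (values invented). Piecemeal testing inflates the chance
# of a false positive; a Bonferroni correction multiplies each p-value
# by the number of tests performed.
p_values = [0.012, 0.034, 0.049, 0.21, 0.003]

alpha = 0.05
bonferroni = [min(1.0, p * len(p_values)) for p in p_values]

for raw, adj in zip(p_values, bonferroni):
    flag = "significant" if adj < alpha else "not significant"
    print(f"raw p = {raw:.3f}  adjusted p = {adj:.3f}  ({flag})")
```

Here three items that look significant at the 0.05 level individually shrink to a single significant item once the number of tests is taken into account.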

Matching and pretest-posttest designs are efforts to preserve the internal validity of the assessment in the absence of experimental control. The objective is a generalizable result. The most ambitious attempts to substitute statistical control for experimental control involve forms of multiple regression models.

Models

The term model can be used to describe a “particular aspect of a given theory” (Fried 2020) or a program to be emulated. In the model as theory, the undergraduate research program is described for replication with adherence to the original method; that is, the program is generalizable. The model as theory suggests that the reader will see a SPUR report that describes an outcome for a sample (usually of undergraduate students) that will generalize to a population. Because URSCI programs seldom follow the formula for assertions of generalization, namely, randomly selecting student participants from the student population and randomly assigning students to treatment and control groups (see Haeger et al. 2020), researchers exploring the nature of undergraduate research employ various statistical methods as a substitute for randomization. The goal is to estimate the main effects of the program to build a theory of URSCI. Student participants in these programs tend to be diverse and so confound the main effects of the program. How do researchers attempt to account for student differences? Some analyses of undergraduate research (UR) include attempts at matching non–randomly assigned program participants with nonparticipants. These analyses employ a range of techniques from simple matching to advanced regression analysis to examine whether student characteristics moderated the program outcome. Rodenbusch et al. (2016), for example, reported that regression analysis of race/ethnicity, gender, and first-generation undergraduate status yielded no significant relation to program success. Galli and Bahamonde (2018) matched UR students and comparison groups on grade point average at time of program admission. Whittinghill et al. (2019) reported an analysis of 10 years of data concerning the effect of UR on graduation rates, grade point average, and entrance into graduate programs.
They used propensity matching (Rosenbaum and Rubin 1983) to create a quasi-control group for comparison with the outcomes for UR researchers. Brouhle and Graham (2022) employed a probit regression model to account for possible confounding variables affecting undergraduate research students and a comparison group of nonresearchers. The technique allowed the researchers to argue that differential outcomes, such as the superior grade point averages of the undergraduate research students, were not based on a confounding variable. Sell, Naginey, and Stanton (2018) compared the grade point averages of students with research experience with those of students who did not have it, for both contemporary students and graduates. For graduates, propensity matching was used to form a matched comparison group to the undergraduate research group. The analysis, which matched the groups on eight variables including gender and first-generation undergraduate status, found significantly higher grade point averages for research students.
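A minimal sketch of the matching step, assuming propensity scores have already been estimated from a model of program participation (per Rosenbaum and Rubin 1983); the student ids and scores below are hypothetical. Each program participant is greedily paired with the closest remaining comparison student.

```python
# Hypothetical precomputed propensity scores (id -> estimated
# probability of program participation). In practice these come from
# a logistic or probit regression on student covariates.
participants = {"s1": 0.72, "s2": 0.55, "s3": 0.81}
pool         = {"c1": 0.70, "c2": 0.50, "c3": 0.83, "c4": 0.40}

matches = {}
available = dict(pool)
for sid, score in sorted(participants.items()):
    # Greedy 1:1 nearest-neighbor match on the propensity score.
    cid = min(available, key=lambda c: abs(available[c] - score))
    matches[sid] = cid
    del available[cid]

print(matches)
```

The matched comparison group then stands in for a control group when outcomes such as GPA or graduation rates are compared.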

Large-scale programs, or programs that consolidate data over several years, recognize that the student is a heterogeneous variable; that is, within the student sample there are many subsamples. These subsamples may be classified by race, ethnicity, gender, or culture. Large-scale programs intend to benefit all students, so quantitative methods are employed to show how well the program produces a general main effect. Some large-scale programs test for differences between student subsamples on a quantitative measure and simply report that no differences were found (Shaffer et al. 2014). Others use sophisticated modeling to eliminate the influence of possible confounds. Hanauer et al. (2017) examined the impact of the SEA-PHAGES undergraduate research program in biology on student success while accounting for a variety of student characteristics. They reported equally positive outcomes for students with diverse economic backgrounds, academic performance, gender, and ethnicity. These approaches attempt to preserve the idea of the general reference population, treating the demographic and economic identities of students as confounding variables that may be removed from the analysis statistically, revealing a main effect of URSCI on the general reference population of undergraduate students.

The third use of validity is external validity, usually defined as the degree to which research findings generalize to other populations, settings, or times. The usual argument is that the results drawn from a sample generalize to a reference population. The construct of the reference population to which studies generalize has been questioned by awareness of how WEIRD (Western, educated, industrialized, rich, and democratic) cultural participants in psychological research skew the results away from generalizability (American Psychological Association 2010). Reports published in SPUR seem cognizant of the need to address multiple student populations, an approach sometimes termed culturally responsive assessment (Baker and Henning 2022). Pursuing the goal of generalizability encourages analysts to control confounding variables such as student ethnicity or gender. Pursuing the goal of transferability encourages the consideration of these variables as support factors that are not neutralized but optimized to promote student success. Following Cartwright and Hardie (2012), researchers should be free to optimize support factors rather than to suppress confounding variables. Support factors are “other members of the team of causes” that optimize success. For example, reported successes for undergraduate research in genomics (Lopatto et al. 2008) originated at an institution known for high student selectivity and good financial resources. Reported success of the same program at community colleges (Croonquist et al. 2023) required the recognition that many support factors of the community college programs differed from those in the early reports. Further examples of diverse yet effective programs may be found in the SPUR special issue published in summer 2018, which highlighted culturally relevant programs (Boudreau et al. 2018; Puniwai-Ganoot et al. 2018) that reported effectiveness without claiming to be replications of a standard method. 
Each program deployed a package of support factors to optimize the program’s success. Whereas studies in pursuit of generalizable results set aside variables such as gender, ethnicity, and socioeconomic status, culturally relevant programs foreground these variables and employ the necessary support factors to facilitate the program’s, and the student’s, success. SPUR reports often suggest model programs that may be emulated (Dickter et al. 2018; Follmer et al. 2017; Foster and Usher 2018; Gilbertson et al. 2021; Gould 2018). The approach makes sense, given that SPUR is a trading post of ideas across academic disciplines and interdisciplines.

SPUR and its parent organization CUR value diversity and equity. Equity is typically taken to mean that different students need adjustments to correct for imbalances and obstacles to success. Equity is a support factor. Equity adjustments imply that students are not replicates of each other. The challenge, then, is to find measures of program effectiveness that include the individual differences of student participants. For this purpose, it is necessary to reimagine a common distinction in assessment research between direct and indirect measures of student behavior. Direct measures of student learning are said to include tests of knowledge such as exams and quizzes. Indirect measures of student learning include quantitative self-reports found in surveys. Although the multioperational approach to assessment (Cook and Campbell 1979) recommends the use of both measures rather than relying on one, direct measures have been enshrined as superior to student self-reported measures. Within URSCI programs the privileged status of direct measures needs to be interrogated, given that many programs encourage students to create unique products, artifacts, or scholarly reports. The interrogation may proceed in this way: indirect measures of student behavior, that is, self-reported quantitative ratings, seem to cast the student as an audience to some instructional performance. The self-report is often anonymous, preventing the appreciation of the role of the student’s identity in their experience. In undergraduate research, scholarship, and creative inquiry the student is an active participant (but see Grindle et al. 2021). Their experience is necessarily interpreted through the lens of their personal identity. URSCI experiences may modify or enlarge the student’s identity with respect to professionalism or joining a community of scholars (Palmer et al. 2018).
Rigorous statistical modeling treats aspects of identity as confounding variables that need to be partitioned from the main effect of URSCI so that a generalizable treatment effect may be uncovered. Standard quantitative methods such as analysis of variance or multiple linear regression treat the interaction of the independent variable and the student’s identity as an isolatable, additive, and linear component of the experience. If the goal of the assessment is not, however, a generalization from the student sample to a unitary reference population, then we may become interested in the student’s identity as a support factor for the program’s success. The joint effect of a program and the student’s identity is not an interaction but an intersection. The individual differences of the students become a focus of assessment, and the student’s survey data evolves from indirect measure to direct measure. Self-report becomes self-disclosure. Self-disclosure offers the most direct measure of the student’s URSCI experience. The challenge going forward is to optimize the use of quantitative methods to find precise descriptors of student outcomes while preserving the individual differences in student success.
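The contrast drawn here can be made concrete with a small regression sketch. The snippet below is a hypothetical illustration only; the variables, binary coding, and effect sizes are invented. It shows how standard multiple linear regression reduces the joint effect of program participation and a student's identity to a single additive coefficient on a product term:

```python
import numpy as np

# Hypothetical illustration: all variable names and effect sizes are invented.
rng = np.random.default_rng(0)
n = 400
program = rng.integers(0, 2, n)    # 1 = participated in the URSCI program
identity = rng.integers(0, 2, n)   # a (hypothetical) binary identity marker
noise = rng.normal(0.0, 0.1, n)

# Simulated outcome: intercept, two main effects, and one interaction term
y = 1.0 + 0.5 * program + 0.3 * identity + 0.8 * program * identity + noise

# The standard model forces the joint effect of program and identity into a
# single additive, linear coefficient on the product column.
X = np.column_stack([np.ones(n), program, identity, program * identity])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta = [intercept, program effect, identity effect, interaction effect]
```

Whatever is distinctive about an individual student beyond that one interaction coefficient is relegated to the error term; this is precisely the reduction questioned above when the goal is not generalization to a unitary reference population.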

The continuing challenge for faculty and staff who administer undergraduate research programs will be the nearly compulsory assessment of student learning and attitudes. The work may seem challenging to program faculty and staff who do not regularly employ quantitative methods. Consulting the myriad online courses, websites, and videos concerning statistics may be off-putting. A less abrasive introduction to quantitative methods may be sources such as Statistics Done Wrong (Reinhart 2015) or Statistics As Principled Argument (Abelson 1995), books that address common problems of quantitative decision-making without elaborate formulas. Similarly, The Craft of Research (Booth et al. 2016), although it does not cover statistical analysis, has a useful chapter on communicating evidence visually. For readers wishing to tutor themselves in statistical techniques there are Statistics Unplugged (Caldwell 2013) and Statistics for the Terrified (Kranzler 2003). For issues concerning quasi-experimental design and threats to validity, Cook and Campbell (1979) remains a standard text.

Encouraging best practices includes encouraging the practitioner. The ongoing explorations in programs for undergraduate research, scholarship, and creative inquiry will best be sustained if they are beneficial to the student and the mentor. Quantitative methods may provide a perspective through which the benefits may be discerned. The construction of this perspective and the picture that emerges provide a shared journey for all participants.

Conflict of Interest

The author has no conflict of interest.

IRB Statement

Not applicable.

Data Availability

Not applicable.

References

Abelson, Robert P. 1995. Statistics As Principled Argument. Hillsdale, NJ: Lawrence Erlbaum.

American Psychological Association. 2010. “Are Your Findings ‘WEIRD’?” Monitor on Psychology 41(5): 11. Accessed July 17, 2023. https://www.apa.org/monitor/2010/05/weird

Ashcroft, Jared, Veronica Jaramillo, Jillian Blatti, Shu-Sha Angie Guan, Amber Bui, Veronica Villasenor, Alina Adamian, et al. 2021. “BUILDing Equity in STEM: A Collaborative Undergraduate Research Program to Increase Achievement of Underserved Community College Students.” Scholarship and Practice of Undergraduate Research 4(3): 47–58. doi: 10.18833/spur/4/3/11

Baker, Gianina R., and Gavin W. Henning. 2022. “Current State of Scholarship on Assessment.” In Reframing Assessment to Center Equity: Theories, Models, and Practices, ed. Gavin Henning, Gianina R. Baker, Natasha A. Jankowski, Anne E. Lundquist, and Erick Montenegro, 57–79. Sterling, VA: Stylus.

Barney, Christopher C. 2017. “An Analysis of Funding for the NSF REU Site Program in Biology from 1987 to 2014.” Scholarship and Practice of Undergraduate Research 1(1): 11–19. doi: 10.18833/spur/1/1/1

Beer, Francisca, Christina M. Hassija, Arturo Covarrubias-Paniagua, and Jeffrey M. Thompson. 2019. “A Peer Research Consultant Program: Feasibility and Outcomes.” Scholarship and Practice of Undergraduate Research 2(3): 4–13. doi: 10.18833/spur/2/3/4

Booth, Wayne C., Gregory G. Colomb, Joseph M. Williams, Joseph Bizup, and William T. Fitzgerald. 2016. The Craft of Research. 4th ed. Chicago: University of Chicago Press.

Boudreau, Kristin, David DiBiasio, and Zoe Reidinger. 2018. “Undergraduate Research and the Difference It Makes for LGBTQ+ Students.” Scholarship and Practice of Undergraduate Research 1(4): 46–47. doi: 10.18833/spur/1/4/1

Brooks, Andrea Wilcox, Jane Hammons, Joseph Nolan, Sally Dufek, and Morgan Wynn. 2019. “The Purpose of Research: What Undergraduate Students Say.” Scholarship and Practice of Undergraduate Research 3(1): 39–47. doi: 10.18833/spur/3/1/7

Brouhle, Keith, and Brad Graham. 2022. “The Impact of Undergraduate Research Experiences on Graduate Degree Attainment across Academic Divisions.” Scholarship and Practice of Undergraduate Research 6(1): 32–42.

Brown, Daniel A., Nina B. Wright, Sylvia T. Gonzales, Nicholas E. Weimer, and Julio G. Soto. 2020. “An Undergraduate Research Approach That Increased Student Success at a Hispanic-Serving Institution (HSI): The SURE Program at Texas State University.” Scholarship and Practice of Undergraduate Research 4(1): 52–62. doi: 10.18833/spur/4/1/18

Caldwell, Sally. 2013. Statistics Unplugged. 4th ed. Belmont, CA: Wadsworth, Cengage Learning.

Campbell, Donald T., and Donald W. Fiske. 1959. “Convergent and Discriminant Validation by the Multitrait-Multimethod Matrix.” Psychological Bulletin 56: 81–105.

Cartwright, Nancy. 2007. Hunting Causes and Using Them: Approaches in Philosophy and Economics. Cambridge, UK: Cambridge University Press.

Cartwright, Nancy, and Jeremy Hardie. 2012. Evidence-Based Policy: A Practical Guide to Doing It Better. Oxford, UK: Oxford University Press.

Cohen, Jacob. 1990. “Things I Have Learned (So Far).” American Psychologist 45: 1304–1312.

Cook, Thomas D., and Donald T. Campbell. 1979. Quasi-Experimentation: Design and Analysis Issues for Field Settings. Boston: Houghton Mifflin.

Croonquist, Paula, Virginia Falkenberg, Natalie Minkovsky, Alexa Sawa, Matthew Skerritt, Maire K. Sustacek, Raffaella Diotti, et al. 2023. “The Genomics Education Partnership: First Findings on Genomics Research in Community Colleges.” Scholarship and Practice of Undergraduate Research 6(3): 17–28. doi: 10.18833/spur/6/3/1

Deming, W. Edwards. 1953. “On the Distinction between Enumerative and Analytic Surveys.” Journal of the American Statistical Association 48: 244–255.

Dickter, Cheryl L., Anne H. Charity Hudley, Hannah A. Franz, and Ebony A. Lambert. 2018. “Faculty Change from Within: The Creation of the WMSURE Program.” Scholarship and Practice of Undergraduate Research 2(1): 24–32. doi: 10.18833/spur/2/1/6

Dillon, Heather E. 2020. “Development of a Mentoring Course-Based Undergraduate Research Experience (M-CURE).” Scholarship and Practice of Undergraduate Research 3(4): 26–34. doi: 10.18833/spur/3/4/7

Follmer, D. Jake, Sarah Zappe, Esther Gomez, and Manish Kumar. 2017. “Student Outcomes from Undergraduate Programs: Comparing Models of Research Experiences for Undergraduates (REUs).” Scholarship and Practice of Undergraduate Research 1(1): 20–27. doi: 10.18833/spur/1/1/5

Foster, Stephanie L., and Bethany M. Usher. 2018. “Comparing Two Models of Undergraduate Research Using the OSCAR Student Survey.” Scholarship and Practice of Undergraduate Research 1(3): 30–39. doi: 10.18833/spur/1/3/6

Fried, Eiko I. 2020. “Theories and Models: What They Are, What They Are For, and What They Are About.” Psychological Inquiry 31: 336–344. doi: 10.1080/1047840X.2020.1854011

Galli, Dominique M., and Rafael Bahamonde. 2018. “Assessing IUPUI’s Diversity Scholars Research Program: Lessons Learned.” Scholarship and Practice of Undergraduate Research 1(4): 12–17. doi: 10.18833/spur/1/4/10

Garrett, Arnell, Frances D. Carter-Johnson, Susan M. Natali, John D. Schade, and Robert M. Holmes. 2021. “A Model Interdisciplinary Collaboration to Engage and Mentor Underrepresented Minority Students in Lived Arctic and Climate Science Research Experiences.” Scholarship and Practice of Undergraduate Research 5(1): 16–26. doi: 10.18833/spur/5/1/4

Gerring, John. 2012. “Mere Description.” British Journal of Political Science 42: 721–746.

Gilbertson, Lynn, Jeannine Rowe, Yeongmin Kim, Catherine W. M. Chan, Naomi Schemm, and Michael Unhoch. 2021. “An Online Training Program to Enhance Novice Researchers’ Knowledge and Skills.” Scholarship and Practice of Undergraduate Research 4(4): 33–41. doi: 10.18833/spur/4/4/4

Gold, A. U., Rachel Atkins, and Karen S. McNeal. 2021. “Undergraduates’ Graph Interpretation and Scientific Paper Reading Shift from Novice- to Expert-Like as a Result of Participation in a Summer Research Experience: A Case Study.” Scholarship and Practice of Undergraduate Research 5(2): 8–19. doi: 10.18833/spur/5/2/2

Goodwin, Kerri A., and C. James Goodwin. 2017. Research in Psychology: Methods and Design. Las Vegas, NV: Wiley.

Gould, Laurie. 2018. “Introduction: Models of Undergraduate Research Mentoring.” Scholarship and Practice of Undergraduate Research 2(1): 2–3. doi: 10.18833/spur/2/1/10

Grindle, Nicholas, Stefanie Anyadi, Amanda Cain, Alastair McClelland, Paul Northrop, Rebecca Payne, and Sara Wingate Gray. 2021. “Re-Evaluating Passive Research Involvement in the Undergraduate Curriculum.” Scholarship and Practice of Undergraduate Research 5(1): 52–58. doi: 10.18833/spur/5/1/12

Haeger, Heather, John E. Banks, Camille Smith, and Monique Armstrong-Land. 2020. “What We Know and What We Need to Know about Undergraduate Research.” Scholarship and Practice of Undergraduate Research 3(4): 62–69. doi: 10.18833/spur/3/4/4

Hanauer, David I., Mark J. Graham, SEA-PHAGES, Laura Betancur, Aiyana Bobrownicki, Steven G. Cresawn, Rebecca A. Garlena, et al. 2017. “An Inclusive Research Education Community (iREC): Impact of the SEA-PHAGES Program on Research Outcomes and Student Learning.” Proceedings of the National Academy of Sciences 114: 13531–13536. doi: 10.1073/pnas.1718188115

Hunter, Anne-Barrie, Timothy Weston, Sandra L. Laursen, and Heather Thiry. 2009. “URSSA: Evaluating Student Gains From Undergraduate Research in the Sciences.” CUR Quarterly 29(3): 15–19.

Kass, Robert E., Brian S. Caffo, Marie Davidian, Xiao-Li Meng, Bin Yu, and Nancy Reid. 2016. “Ten Simple Rules for Effective Statistical Practice.” PLOS Computational Biology 12(6): e1004961. doi: 10.1371/journal.pcbi.1004961

Killion, Patrick J., Ian B. Page, and Victoria Yu. 2019. “Big-Data Analysis and Visualization as Research Methods for a Large-Scale Undergraduate Research Program at a Research University.” Scholarship and Practice of Undergraduate Research 2(4): 14–22. doi: 10.18833/spur/2/4/7

Kranzler, John H. 2003. Statistics for the Terrified. 3rd ed. Upper Saddle River, NJ: Pearson Education.

Kuan, Jennifer, and Quentin C. Sedlacek. 2022. “Does It Matter If I Call It a CURE? Identity Development in Online Entrepreneurship Coursework.” Scholarship and Practice of Undergraduate Research 6(1): 23–31. doi: 10.18833/spur/6/1/7

LaPlant, James T. 2017. “Welcome to the Inaugural Issue of SPUR.” Scholarship and Practice of Undergraduate Research 1(1): 3–4.

Lopatto, David. 2004. “Survey of Undergraduate Research Experiences (SURE): First Findings.” Cell Biology Education 3: 270–277. doi: 10.1187/cbe.04-07-0045

Lopatto, David, Consuelo Alvarez, Daron Barnard, Chitra Chandrasekaran, Hui-Min Chung, Charles Du, Todd Eckdahl, et al. 2008. “Genomics Education Partnership.” Science 322: 684–685. doi: 10.1126/science.1165351

Lukes, Laura A., Katherine Ryker, Camerian Millsaps, Rowan Lockwood, Mark D. Uhen, Christian George, Callan Bentley, and Peter Berquist. 2019. “Leveraging a Large Database to Increase Access to Undergraduate Research Experiences.” Scholarship and Practice of Undergraduate Research 2(4): 4–13. doi: 10.18833/spur/2/4/6

McLaughlin, Jacqueline S., Mit Patel, and Joshua B. Slee. 2020. “A CURE Using Cell Culture–Based Research Enhances Career-Ready Skills in Undergraduates.” Scholarship and Practice of Undergraduate Research 4(2): 49–61. doi: 10.18833/spur/4/2/15

Naepi, Sereana, and Airini. 2019. “Knowledge Makers: Indigenous Student Undergraduate Researchers and Research.” Scholarship and Practice of Undergraduate Research 2(3): 52–60. doi: 10.18833/spur/2/3/7

Nelson, Randy B., Kideste Mariam Yusef, and Adrienne Cooper. 2019. “Expanding Minds through Research: Juvenile Justice and Big Data.” Scholarship and Practice of Undergraduate Research 2(4): 30–36. doi: 10.18833/spur/2/4/10

Nicols-Grinenko, Annemarie, Rachel B. Verni, Jennifer M. Pipitone, Christin P. Bowman, and Vanya Quinones-Jenab. 2017. “Building a Culture of Undergraduate Research: A Case Study.” Scholarship and Practice of Undergraduate Research 1(2): 43–51. doi: 10.18833/spur/1/2/13

Palmer, Ruth J., Andrea N. Hunt, Michael R. Neal, and Brad Wuetherick. 2018. “The Influence of Mentored Undergraduate Research on Students’ Identity Development.” Scholarship and Practice of Undergraduate Research 2(2): 4–14. doi: 10.18833/spur/2/2/1

Puniwai-Ganoot, Noelani, Sharon Ziegler-Chong, Rebecca Ostertag, and Moana Ulu Ching. 2018. “Mentoring Pacific Island Students for Conservation Careers.” Scholarship and Practice of Undergraduate Research 1(4): 25–32. doi: 10.18833/spur/1/4/11

Reinhart, Alex. 2015. Statistics Done Wrong: The Woefully Complete Guide. San Francisco: No Starch Press.

Rodenbusch, Stacia E., Paul R. Hernandez, Sarah L. Simmons, and Erin L. Dolan. 2016. “Early Engagement in Course-Based Research Increases Graduation Rates and Completion of Science, Engineering, and Mathematics Degrees.” Cell Biology Education–Life Sciences Education 15(2): ar20. doi: 10.1187/cbe.16-03-0117

Rosenbaum, Paul R., and Donald B. Rubin. 1983. “The Central Role of the Propensity Score in Observational Studies for Causal Effects.” Biometrika 70: 41–55. doi: 10.1093/biomet/70.1.41

Sell, Andrea J., Angela Naginey, and Cathy Alexander Stanton. 2018. “The Impact of Undergraduate Research on Academic Success.” Scholarship and Practice of Undergraduate Research 1(3): 19–29. doi: 10.18833/spur/1/3/8

Shaffer, Christopher D., Consuelo J. Alvarez, April E. Bednarski, David Dunbar, Anya L. Goodman, Catherine Reinke, Anne G. Rosenwald, et al. 2014. “A Course-Based Research Experience: How Benefits Change with Increased Investment in Instructional Time.” Cell Biology Education–Life Sciences Education 13: 111–130. doi: 10.1187/cbe.13-08-0152

Spronken-Smith, Rachel, Sally Sandover, Lee Partridge, Andy Leger, Tony Fawcett, and Liz Burd. 2018. “The Challenges of Going Global with Undergraduate Research: The Matariki Undergraduate Research Network.” Scholarship and Practice of Undergraduate Research 2(2): 64–72. doi: 10.18833/spur/2/2/8

Szecsi, Tunde, Charles Gunnels, Jackie Greene, Vickie Johnston, and Elia Vazquez-Montilla. 2019. “Teaching and Evaluating Skills for Undergraduate Research in the Teacher Education Program.” Scholarship and Practice of Undergraduate Research 3(1): 20–29. doi: 10.18833/spur/3/1/5

Tian, Jing, Yiheng Wang, Ghang Ren, and Yingzhe Lei. 2022. “Undergraduate Research and Inquiry-Based Learning in Geographical Information Science: A Case Study from China.” Scholarship and Practice of Undergraduate Research 5(4): 16–23. doi: 10.18833/spur/5/4/8

Tufte, Edward R. 1983. The Visual Display of Quantitative Information. Cheshire, CT: Graphics Press.

Webb, Eugene J., Donald T. Campbell, Richard D. Schwartz, Lee Sechrest, and Janet B. Grove. 1981. Nonreactive Measures in the Social Sciences. Boston: Houghton Mifflin.

Whittinghill, Jonathan C., Simeon P. Slovacek, Laura P. Flenoury, and Vivian Miu. 2019. “A 10-Year Study on the Efficacy of Biomedical Research Support Programs at a Public University.” Scholarship and Practice of Undergraduate Research 3(1): 30–38. doi: 10.18833/spur/3/1/3

Zhen, Willa. 2020. “Teaching Research Skills to Vocational Learners: Teaching Chefs to Research.” Scholarship and Practice of Undergraduate Research 4(2): 21–26. doi: 10.18833/spur/4/2/6

David Lopatto
Grinnell College
lopatto@grinnell.edu

David Lopatto is a professor of psychology and the Samuel R. and Marie-Louise Rosenthal Professor of Natural Science and Mathematics at Grinnell College. He is the former director of the Grinnell College Center for Teaching, Learning, and Assessment. He has been studying the features and benefits of undergraduate research experiences for many years, creating instruments, including the Survey of Undergraduate Research Experiences (SURE) and the Survey of Classroom Undergraduate Research Experiences (CURE), which may be found at https://sure.sites.grinnell.edu.

SUREbyts: Presenting Early-Year Undergraduate Students with Videos on Research Topics

Embedding research into the undergraduate curriculum has been shown to be a highly impactful pedagogical approach across all disciplinary areas (Walkington 2015). By engaging with structured research opportunities as part of their undergraduate studies, students are encouraged to creatively explore the topics being taught while also developing important disciplinary and transversal skills (Healey and Jenkins 2009). The opportunity for students to engage fully, or partially, with a research project and then present their findings at an undergraduate research conference or publish them in a journal has attracted substantial attention in recent decades, as evidenced by the proliferation of dissemination platforms for undergraduate research (Barker and Gibson 2022). These opportunities tend, however, to focus primarily on students in the later stages of their undergraduate studies. There is nonetheless increasing attention in the literature to how undergraduate students at the earlier stages of their studies can become involved in, or exposed to, research projects (Shelby 2019; Wolkow et al. 2014). This article describes one project that shares this objective: the SUREbyts project.

The SUREbyts project allows first- and second-year undergraduate students to engage with research through a collection of video recordings in which experienced and early-stage researchers describe a problem, pose a question and possible solutions related to the problem, and then describe their research-informed view on the most appropriate solution. These videos, covering many of the prominent scientific disciplines, are freely available to all lecturers to use in class with their students under a Creative Commons BY-NC-ND 4.0 license. Suggested uses include integrating SUREbyts into a discussion regarding the topic of the video or using SUREbyts as part of a formative or summative assessment. Of the 294 students who responded to a survey about their engagement with SUREbyts, the majority reported that it had increased their interest in research in general, and their understanding of the work undertaken by researchers specifically. There are challenges, however, associated with this approach. Researchers often find it difficult to present their research in an accessible fashion, appropriate for early-stage undergraduate students. Creating an interesting and engaging video requires careful guidance and usually several design iterations. Additionally, lecturers require guidance on how to incorporate these videos meaningfully into their teaching, as misaligned use can result in a negative student learning experience.

The next sections describe the SUREbyts project in detail. The article concludes with a set of recommendations to institutions that are considering implementation of such a project using SUREbyts as a model. Institutions that do so will be well equipped to enhance the awareness of research among their first- and second-year students.

SUREbyts

The Science Undergraduate Research Experience (SURE) Network (O’Leary et al. 2021) launched the SUREbyts project in 2021 with the objective of enhancing research awareness at the early stages of undergraduate programs in the sciences in Ireland. Through SUREbyts, experienced researchers and postgraduate research students were invited to record a brief video (a SUREbyt) centered on a question related to their research. The videos were then made available on the website of the SURE Network (SURE Network 2021), from where both students and lecturers could access them. Lecturers were encouraged to use SUREbyts videos in class to help their students learn about the research that was taking place within their discipline.

Video was chosen as the medium for this project for a variety of reasons, including ensuring that the research taking place throughout the network could be showcased to all students and enabling the content to be reviewed and edited in advance of its use to ensure that it meets the requirements of the project. Of most relevance for this article, the SURE Network has ambitions for the SUREbyts collection of resources and the SUREbyts model to be adopted by institutions beyond Ireland. The collection currently comprises 34 SUREbyts videos that are freely available for use under a Creative Commons BY-NC-ND 4.0 license. To better understand the impact of the SUREbyts model, the project team surveyed lecturers and students who had used the SUREbyts resources. Thirteen lecturers and 294 students replied to the online surveys. The feedback obtained, both positive and negative, shapes the remaining sections and provides guidance to others who wish to contribute to, use, or replicate the SUREbyts model.

Format

A SUREbyt is a 10- to 12-minute video designed to provoke a discussion among students when played in class. In the video, students are informed about the research career and work of a research student or professional researcher at their own or another institution. The students are then presented with a question related to that work and three possible solutions. This can be thought of as the type of question that might be offered to an audience with a request for a show of hands on the most suitable answer. A break in the video then shows a countdown clock for two minutes, during which time students are encouraged to discuss the possible solutions with their nearest classmates. The second part of the video then presents the researcher’s own view on the best solution. Often, the researcher will explain that they have a preferred solution but that other researchers do not share their view. It is important that students are exposed to this type of discourse so that they appreciate that research does not always result in one true answer, and that it is acceptable for researchers to hold diverse views based on their own findings.

In part 1 of the SUREbyt video, shown in Figure 1, the researcher introduces themself and their research and presents a question and possible solutions. In part 2, also shown in Figure 1, the researcher’s preferred solution is presented and justified. Both parts are fully developed by the researcher, based on strict, but accessible, guidelines available through the SUREbyts website. The researcher then submits their videos to their institutional SUREbyts point of contact, as shown in Figure 2. The institutional point of contact reviews the video and may request edits or may liaise with the central coordinators of the SUREbyts project, who review the videos for quality and adherence to the published guidelines. When the review is complete, the researcher submits both parts, with a signed consent form, to the project team. The two parts are then edited into the final format shown in Figure 1 by the SUREbyts project team. At this stage, a themed introduction and outro are added to bookend the videos, and a two-minute countdown clock is inserted between the two parts. Once finalized, the SUREbyt video is published and categorized by discipline on the SURE Network website, where it is made available at a unique URL. Many videos are multidisciplinary and appear in multiple categories, helping alert students to the importance of research that transcends subject boundaries. The creator of a published SUREbyt video can apply for recognition with a digital badge issued by the SURE Network.

Collection

A primary metric of success for the SUREbyts project was the recruitment of 34 researchers from around Ireland, in all the SURE Network’s partner institutions, to create the videos. Of these, 19 were lecturers who were actively involved in research, and 15 were postgraduate research students. The mix of creators at different stages of their career meant that the full SUREbyts collection was representative of the diversity of experience that features in the research landscape. It also provided early-stage researchers and postgraduate students with a means of disseminating their research and enhancing both its engagement and impact, a common requirement of grant-awarding bodies. Equally important was the diversity of disciplinary areas, as shown in Table 1. Thirty-four SUREbyts videos were published, with several in multiple categories.

The project resulted in a collection of cutting-edge research videos addressing accessible, engaging topics and featuring questions that were designed for a novice audience. The most popular of the videos was titled Feeding Martian Colonies. In this SUREbyt video, the creator, a postgraduate student, explained her research background and project, which related to hydroponics. Following a four-minute description of her research, the researcher posed the question, “How are we going to feed Martian colonies?” and offered three solutions: (a) mix Martian soil with “human fertilizer” (urine and feces); (b) send constant resupply missions from Earth; (c) soil-less growth under controlled environment. At this point the video moves to a two-minute break so that viewers can consider the possible solutions, discussing them as appropriate with classmates. In the final part of the video, the researcher explained why the third option was her preferred solution and related this to her own current research. This SUREbyt video attracted approximately one-quarter of all the hits for the whole collection. Other popular videos cover a range of disciplinary areas. Titles include Pond Water, Endocrine-Disrupting Chemicals, Walking, Microbial Growth Strategies, and Tsunami.

Quality

The SUREbyts coordinators evaluated the quality of each of the SUREbyts videos against a set of technical requirements, a set of formatting requirements including the length of video, and the requirement for the video to be engaging for novice science students. Survey respondents subsequently helped further develop understanding of quality.

Survey feedback suggested variation in quality across the videos and dissatisfaction among students when a video did not adhere fully to the guidelines. This is evidenced by one respondent’s comment about one of the videos, which was almost 20 minutes in length.

I definitely felt some were of higher quality (the ones I used) than others—so it would be great if they were continually updated to give more choice. Students seemed to enjoy them but the group who watched the microplastics one felt it was too long—I really enjoyed this one in particular and so do not agree but thought this feedback may be useful (one student told me she increased the speed so that it was more watchable!).

Students also commented on the need for “simpler language and avoiding terminology” and that “there should not be much written text on the screen.” Students were frustrated by poor-quality recordings and the need to “improve the mic quality [because] some background noises could be heard and the audio was difficult to comprehend because of this.” There is a balance to be struck between the requirements set out for video creators, which may serve as barriers to their participation, and the requirement for high-quality videos.

Suggestions from survey respondents on how to improve the videos included adding subtitles and including a quiz at the end of each video. The addition of subtitles is easily achieved through software automation and will be implemented in the next iteration of SUREbyts videos. The addition of quizzes was considered, but it was felt this would alter the purpose of the SUREbyt video, which is intended to focus on a single focal question in a classroom situation. Lecturers may decide to build quizzes related to the content of the video within the instructional context in which the video is used. It is important, however, that the overall burden on the creator of the video is kept to a minimum, as the success of SUREbyts depends on the willingness of busy researchers and research students to take the time to develop accessible, engaging videos centered on a carefully designed question.

What is clear is that students and lecturers have a very good sense of what constitutes good quality, and this is reflected in the popularity of certain videos. Popularity is driven, in the first instance, by the lecturer who decides on which video to use in their class, and how to use it.

Instruction

Lecturers in first- and second-year modules in SURE Network partner institutions were encouraged to use the SUREbyts resources as part of the learning design for their classes. As with the video creators, lecturers could apply to the SURE Network for a digital badge once they had incorporated SUREbyts into their classes.

A dedicated online session was arranged for lecturers to explore different ways in which the resources could be used on their courses. Of these approaches, which are described on the SURE Network website, the one that was adopted by the majority of lecturers was “class opener.” With this method, a lecturer commences a class by playing the SUREbyt video from start to finish. When the middle part of the video plays, students are asked to discuss the possible solutions with each other, which they do again after the video completes. The lecturer then relates the subject of the SUREbyt to the topic under discussion in that week’s class. Other approaches such as “class bridge,” in which the playing of the video is divided between sessions, were also adopted by some lecturers. Others innovated and developed their own approach to using the videos, such as this lecturer:

I used the videos in a slightly different way than what was perhaps intended. First, I used the videos at the start of the semester as an ice breaker. This enabled the students to initiate conversations with each other, and it was very effective—the noise from the conversations was very loud!

Based on survey responses, lecturers’ perception of the value of the SUREbyts videos was generally positive, but not universally so. Nine of the 13 lecturers surveyed (69 percent) felt that their students’ awareness of research was enhanced through their engagement with SUREbyts. Ten of the lecturers (77 percent) said that they would use the videos again, with seven of that group (54 percent) “very much” likely to do so. These lecturers identified the videos they used as good triggers for discussion, with one commenting:

The videos were perfectly pitched for first-year students who really engaged and considered the questions posed. The videos were great examples of real-world applications of computing research that were clearly presented at the right level for students.

However, other lecturers felt that the introduction of subject matter relating to postgraduate research was not appropriate for the early stages of first-year undergraduates. One lecturer responded in the survey with the following view:

For the vast majority of first-years in semester 1, which is the only time I teach these groups, they are not ready to start thinking about postgraduate research.

Another lecturer felt that the material presented was more appropriate for more experienced students, commenting that they “felt that second-year students responded better.” The same lecturer struggled to find class time to use the SUREbyts resources and decided to “provide them with a list of videos and links to use in their own time.” Because the videos were designed to be used in class, ideally with first- and second-year groups, this feedback surfaced both an inconsistency in target level across different videos and the need to watch for uses that depart from the intended design.

An overriding objective of SUREbyts is to increase awareness of research as an activity, with a secondary objective of raising disciplinary knowledge among students. More than 60 percent of the 294 students surveyed agreed that SUREbyts enhanced their understanding of the work of researchers (73 percent), their interest in research in their area (62 percent), and their interest in carrying out research in the future (65 percent). A smaller share, although still a majority (54 percent), felt that they had an increased understanding of the topics they were studying in their program. Some student feedback was glowing:

All of this information that I have gathered from her astounding video has allowed me to ponder the world of horticulture. I never expected to be interested in such topics however, through SUREbyt videos I am sure I will discover many new academic discoveries.

Other students, however, shared the view of some lecturers that the videos are more appropriate for later-stage students:

As an introduction to new students who have no idea about computer science and are new to it, it is confusing, but for ones who have knowledge about the area it is an interesting and further opening to the subject matter of machine learning.

In general, feedback suggests that both lecturers and students recognized the value of the resources in starting in-class discussion, as this lecturer noted:

For the module that I am teaching students need to create a technology solution (high-level prototype design) to address one or more of the SDGs (sustainable development goals), so these examples served as a great point of discussion on how we can design technology to address real-world problems and consider the needs of end users. This is a great resource that I will certainly use in future!

This highlights the importance of the resources being used as part of a facilitated session or class, rather than as a stand-alone web-based resource. The videos are designed to commence, or contribute to, a discussion, for which the role of the lecturer is essential.

Recommendations

The SUREbyts project developed an innovative format for brief videos that introduce early-stage undergraduate students to real research projects under way in higher education institutions. The project produced detailed guidelines for video creators and users. The project received a mixed but generally positive response from lecturers and students, as detailed in earlier sections. Based on the experience of running the project, the authors present recommendations in the sections below for other institutions that may wish to adopt some or all aspects of the SUREbyts project.

Video Development

Researchers and research students tend to be time-poor but eager for recognition of their research. Research students should be advised on how creating videos for instruction can fulfill the dissemination requirements of their research grants and help raise their profile. Lecturers and researchers should be made aware of how teaching undergraduates can benefit active researchers (Feller 2018) and of how research and teaching can support each other (Ashrafa 2010). SUREbyts digital badges were made available to the creators and users of videos, although very few badges were applied for in practice.

Focus

Digital learning provides a means through which otherwise abstract or unknown concepts can be “illustrated and become tangible” (Kerres and Otto 2022, 701). The illustration of the concept, the question, and the possible solutions are central to the quality of the SUREbyts video. It is important to ensure that the creators of the videos are focused from the start on identifying and presenting a clear, easily understood question that will engage their audience in a meaningful discussion. All other aspects of the SUREbyts video will pivot around the question. In the pilot project described here, templates, detailed guidelines, and dedicated, local support were provided to help achieve this objective.

Interpersonal Support

The SUREbyts project benefited hugely from the support of the established SURE Network (O’Leary et al. 2021). As a nationwide network with representatives in institutions throughout Ireland, SURE was able to provide local support, encouragement, and guidance to video creators. This support was invaluable for encouraging participation in the project and subsequent usage of the videos.

Barriers

It is important that as many barriers to participation as possible are lowered or removed. Creators should not have to carry out extensive editing themselves; this should be provided as part of the final production process. Although guidelines are important and should be adhered to as much as possible, some flexibility should be afforded to the makers of the videos to be creative, within reason. Videos that stray too far from what is expected, such as excessively long videos, will be less attractive to students and lecturers.

Revise

During the SUREbyts project, it became evident that videos that did not reach a certain threshold in production quality, accessibility of the question, appropriateness of language, and quality of presentation would be ignored by lecturers and students. The large disparity in usage between popular and unpopular videos showed the value of continually revising the videos in response to feedback until the appropriate quality is reached. Based on this outcome, the SUREbyts group has revised the guidelines to highlight this for future collaborators and content creators.

Lecturers

Lecturers require guidance on how to use the videos effectively. SUREbyts videos should enable students to experience what Pedaste (2022) describes as the orientation phase of research engagement: a “process of stimulating curiosity about a topic and addressing a learning challenge through stating a problem” (151). For the SUREbyts project, a series of usage scenarios was presented to lecturers to encourage them to use the videos as part of a discussion with their classes. The videos are not intended to be used in the absence of an opportunity for peer discussion. Lecturers can be supported through dedicated training sessions, online resources, and, most valuable of all, case studies of effective use.

Conclusion

Based on feedback received, SUREbyts has proven effective at raising the profile of research among early-year undergraduate students in Ireland. The project team would welcome the adoption by others of the resources, format, or overall approach developed through the project. This article has provided guidance on how to do so. It is hoped that future users will learn from the successes of the SUREbyts project and avoid some of the challenging situations that emerged during the project.

Data Availability Statement

The research instruments used to collect data are available at https://sure-network.ie/surebyts/use. The following statements regarding the storage and availability of data were agreed to with the Technological University Dublin Research Ethics and Integrity Committee:

  • Data will be stored securely, and analysis will take place within the project team, possibly with the support of a small number of administrators external to the team.
  • All data collected will be deleted upon completion of the research, no later than one year following the collection of the data.

Ethical Review Board Statement

The Research Ethics and Integrity Committee of Technological University Dublin approved this project (REC-20-183) on October 11, 2021. This approval was noted and approved by the corresponding committee at each institution at which data were collected.

Conflict of Interest Statement

No conflict of interest to declare.

Acknowledgments

The authors would like to acknowledge the support of Ireland’s National Forum for the Enhancement of Teaching and Learning in Higher Education whose network and discipline fund supported the development of SUREbyts. The authors recognize the work undertaken by the creators of the SUREbyts videos to develop a comprehensive, cross-disciplinary resource that has contributed to the teaching, learning, and assessment of undergraduate students across Ireland, and thank all video creators for this work. The authors also acknowledge and thank the lecturers and students who used the SUREbyts videos and gave up their time to contribute to the data collection for this evaluation study. Finally, the authors acknowledge the SURE Network for its support in promoting the SUREbyts project.

References

Ashrafa, Syed Salman. 2010. “Borrowing a Little from Research to Enhance Undergraduate Teaching.” Procedia Social and Behavioral Sciences 2: 5507–5511.

Barker, Emma, and Caroline Gibson. 2022. “Dissemination in Undergraduate Research: Challenges and Opportunities.” In The Cambridge Handbook of Undergraduate Research, ed. Harald A. Mieg et al., 172–182. Cambridge, UK: Cambridge University Press.

Feller, Marla B. 2018. “The Value of Undergraduate Teaching for Research Scientists.” Neuron 99: 1113–1115. doi: 10.1016/j.neuron.2018.09.005

Healey, Mick, and Alan Jenkins. 2009. Developing Undergraduate Research and Inquiry. York, UK: Higher Education Academy. https://www.advance-he.ac.uk/knowledge-hub/developingundergraduate-research-and-inquiry

Kerres, Michael, and Daniel Otto. 2022. “Undergraduate Research in Digital Learning Environments.” In The Cambridge Handbook of Undergraduate Research, ed. Harald A. Mieg et al., 695–708. Cambridge, UK: Cambridge University Press.

O’Leary, Ciarán, Julie Dunne, Barry Ryan, Therese Montgomery, Anne Marie O’Brien, Cormac Quigley, Claire Lennon, et al. 2021. “Reflections on the Formation and Growth of the SURE Network: A National Disciplinary Network to Enhance Undergraduate Research in the Sciences.” Irish Journal of Academic Practice 9(1): article 7. doi: 10.21427/z3xx-dy28

Pedaste, Margus. 2022. “Inquiry Approach and Phases of Learning in Undergraduate Research.” In The Cambridge Handbook of Undergraduate Research, ed. Harald A. Mieg et al., 149–157. Cambridge, UK: Cambridge University Press.

Shelby, Shameka J. 2019. “A Course-Based Undergraduate Research Experience in Biochemistry That Is Suitable for Students with Various Levels of Preparedness.” Biochemistry and Molecular Biology Education 47: 220–227. doi: 10.1002/bmb.21227

SURE Network. 2021. SUREbyts (website). Accessed August 29, 2023. https://sure-network.ie/surebyts

Walkington, Helen. 2015. Students as Researchers: Supporting Undergraduate Research in the Disciplines in Higher Education. York, UK: Higher Education Academy. https://www.advancehe.ac.uk/knowledge-hub/students-researchers-supporting-undergraduate-research-disciplines-higher-education

Wolkow, Thomas D., Lisa T. Durrenberger, Michael A. Maynard, Kylie K. Harrall, and Lisa M. Hines. 2014. “A Comprehensive Faculty, Staff, and Student Training Program Enhances Student Perceptions of a Course-Based Research Experience at a Two-Year Institution.” Life Sciences Education 13: 724–737. doi: 10.1187/cbe.14-03-0056

Ciarán O’Leary

Technological University Dublin, ciaran.oleary@tudublin.ie

Ciarán O’Leary is the head of learning development for the faculty of computing, digital, and data at Technological University Dublin. O’Leary has been a lecturer in computer science at Technological University Dublin since 2000. O’Leary’s research interests relate to the entanglement of digital technology with academic practice. O’Leary was the first chairperson of the Science Undergraduate Research Experience (SURE) Network from its establishment in 2016 to 2021, and was the project lead for the SUREbyts project.

Gordon Cooke is a lecturer in biological sciences and an active researcher. Cooke completed his PhD in 2004 at the Institute of Technology Tallaght before being appointed as a Newman Fellow at University College Dublin to undertake research into Barrett’s metaplasia. Cooke joined Technological University Dublin in 2016, where he established his own research group with interests in antimicrobial resistance and extracellular vesicles. Cooke also is actively involved in educational research about technology-enhanced learning, student retention, and student resilience.

Julie Dunne has a PhD in chemistry, an MA in higher education, and is a fellow of the Royal Society of Chemistry and a member of the Institute of Food Science and Technology, Ireland. After working in the pharmaceutical industry, Dunne joined Technological University Dublin in 2003 and is currently the head of the School of Food Science and Environmental Health. Dunne’s research interests include work-integrated learning, undergraduate research, education for sustainable development, green biocatalysis, and carbohydrate-based antimicrobials.

Barry Ryan is a biochemistry lecturer currently on secondment to lead the development of the educational model for Technological University Dublin. He promotes (co-) creation to empower and centralize all students across all levels within undergraduate curricula. Ryan is passionate about implementing research-informed teaching and supporting others to develop in this area. Ryan is concurrently a senior fellow of the Higher Education Academy, a National Forum Teaching and Learning research fellow, and a chartered science teacher.

Carla Surlis is an early-stage researcher and lecturer in molecular genetics, specializing in the area of small RNA interactions in human disease. Surlis is enthusiastic about using digital technologies to improve engagement in undergraduate teaching and learning.

Matt Smith is a senior lecturer in computing in the faculty of computing, digital, and data at Technological University Dublin. Smith’s research focuses on interactive multimedia and extended reality technologies, and its applications to support computer-supported learning. He leads the Digital Realities, Interaction and Virtual Environments research group.

Emma Caraher is a lecturer in biopharmaceutical sciences at the School of Chemical and BioPharmaceutical Sciences at Technological University Dublin. Caraher completed her PhD in 1998 at University College Dublin. Following this Caraher worked as a postdoctoral researcher at Ottawa Hospital Research Institute and Health Canada. She joined Technological University Dublin in 2003 and in 2008 secured a Science Foundation Ireland–funded Stokes lectureship. Caraher is program chair of applied biology, bioanalysis, and bioanalytical science.

Claire Lennon lectures on organic chemistry and spectroscopic characterization at the undergraduate and postgraduate levels. Lennon places a strong focus on embedding research in her teaching and across the undergraduate curriculum. Lennon has research interests in stereoselective organic synthesis aiming to develop novel green and sustainable methods, supervising PhD students in these areas. Lennon has been a member of the SURE Network since its inception in 2017.

Evelyn Landers is a lecturer in inorganic chemistry and analytical science. Landers coordinates the first year of seven programs across the departments of science and land sciences and is program leader for the common entry science program. Landers is the recipient of the Teaching Hero Award from the National Forum for the Enhancement of Teaching and Learning in Higher Education and the Union of Students in Ireland as well as a Higher Education Innovation award.

Eileen O’Leary holds a PhD in organic chemistry, a master’s degree in teaching and learning and a certificate in coaching and leadership. O’Leary is a member of the SURE Network and its Digital Badge Committee. O’Leary is seconded to the teaching and learning unit at Munster Technological University. She is leading the program Enabling Academic Transitions through Professional Development, aimed at encouraging new staff to take a reflective and student-centered approach to practice by incorporating active learning.

Geraldine Dowling’s research interests are in the fields of forensic science, chemistry education (universal design for learning, community-based learning, and problem-based learning pedagogies), analytical science, metabolomics, and nutrition science. Dowling held posts in various Irish government ISO17025-accredited laboratories for 12 years prior to entering academia. Dowling has trained staff and students in the revenue, customs, and toxicology fields as a forensic practitioner. She also undertakes consultancy and supervises postgraduate students.

Margaret McCallig is a lecturer in occupational safety and health with over 10 years of industry experience in the construction, engineering, medical device, and food manufacturing industries. McCallig holds a BSc in health and safety systems and an MSc by research in occupational hygiene from the University of Galway. McCallig is currently pursuing a PhD in the area of physical stressors in neonatal intensive care units in Ireland.

Anne Marie O’Brien is a lecturer at the Technological University of the Shannon (TUS) and has been in academia since 2006. O’Brien has an MSc and PhD in toxicology and biochemistry and also holds a postgraduate diploma in learning and teaching. O’Brien chairs the European team-based learning (TBL) collaborative, the SURE Network TBL and Digital Badge Committee, and also is the chair of the TUS Digital Badge Committee.

Valerie McCarthy is a lecturer and program director for the BSc environmental bioscience program at Dundalk Institute of Technology (DkIT). McCarthy is director of the Centre for Freshwater and Environmental Studies at DkIT. McCarthy’s research interests include theoretical community and ecosystem ecology in freshwater systems, investigating the linkages between aquatic systems and their catchments. Her current projects focus on the use of high-frequency and remote-sensing technologies to monitor surface water.

Josephine Treacy is a lecturer at Technological University of the Shannon. Treacy’s qualifications include a graduate diploma in environmental chemistry, MSc in analytical chemistry, PhD in environmental analytical chemistry, diploma in field ecology from University College Cork (UCC), and MEd from Mary Immaculate College, Limerick. Treacy’s previous employment includes postdoctorate research at UCC and being an executive environmental technician with Cork County Council. Her research interests include analytical science, method development, education, and academic writing.

Venturing into Qualitative Research: A Practical Guide to Getting Started

Introduction

We both started our scholarly journeys as biologists. As we trained, we both grew interested in researching undergraduate education and we transitioned to doing education research. We quickly came to realize that our training in experimental approaches and quantitative methods was woefully insufficient to study the diversity of ways students think, believe, value, feel, behave, and change in a variety of learning environments and educational systems.

For instance, there are established ways to quantify some educational variables, but not others. In addition, there may be phenomena at play that we haven’t thought of or that might be counterintuitive, which could lead us to quantify things that end up being irrelevant or meaningless. Herein lies the power of qualitative research. Qualitative research generates new knowledge by enabling rich, multifaceted descriptions of phenomena of interest, known as constructs (i.e., latent, unobservable variables), and producing possible explanations of how phenomena are occurring (i.e., mechanisms or relationships between constructs in different contexts and situations with different individuals and groups).

In this essay, we aim to offer an approachable explanation of qualitative research, including the types of questions that qualitative research is suited to address, the characteristics of robust qualitative research, and guidance on how to get started. We use examples from our own and others’ research to illustrate our explanations, and we cite references where readers can learn more. We expect Scholarship and Practice of Undergraduate Research (SPUR) readers from disciplines with a tradition of qualitative research might question why we would write this piece and what makes us qualified to do so. There are many scholars with much more qualitative research expertise than we have. Yet, we think we can offer a unique perspective to SPUR readers who are new to qualitative research or coming from disciplines where qualitative research is unfamiliar or undervalued. We have both designed, conducted, and published qualitative research in the context of undergraduate education and research experiences. We draw upon this experience in the recommendations we offer here.

Doing qualitative research involves acknowledging your “positionality,” or how your own background, lived experiences, and philosophical understandings of research influence how you approach and interpret the work (e.g., Hampton, Reeping, and Ozkan 2021; Holmes and Darwin 2020). Our positionalities have influenced our approach to this article and qualitative research generally. I (MAP) first learned about qualitative research from my undergraduate academic adviser. She invited me to help her implement and evaluate a capstone course in which groups of microbiology undergraduates engaged in a semester-long research project to address problems faced by community organizations (Watson, Willford, and Pfeifer 2018). At the time, I wasn’t aware of the long-standing history of qualitative research or its different forms and approaches. I just knew that reading quote data helped me understand human experiences in a way that survey numbers did not. Since my introduction to qualitative research, I’ve been fortunate to receive formal training. I consider my most valuable lessons about qualitative research to be through the practical experience of doing qualitative research and being mentored by qualitative researchers.

When I (ELD) first learned about qualitative research, I thought it meant words – perhaps collected through surveys, focus groups, interviews, or class recordings. I thought qualitative research would be easy – it was just words after all, and I had been using words almost my whole life. I assumed if I collected some words and summarized what I thought they meant (think word cloud), I would be doing qualitative research. As we will elaborate here, this is a limited view of what qualitative research is and what qualitative research can accomplish. When I began presenting qualitative research, I found it helpful to draw analogies to qualitative studies in natural science and medical disciplines. For instance, in the field of biology, the invention of technologies (e.g., lenses, microscopes) allowed for detailed observation and rich descriptions of cells (i.e., qualitative research) that led to the development of cell theory, the establishment of the field of cell biology, and quantitative research on cell structure, function, and dysfunction. In my own field of neuroscience, Henry Molaison, known as HM, was the focus of a qualitative case study because he lost the ability to form new long-term memories due to a surgical treatment for severe epilepsy. Rich (i.e., comprehensive and detailed) description of Mr. Molaison’s memory impairment was the basis for hippocampal function being proposed as the main mechanism through which memories are formed. These examples of “non-numbery” research that produce influential descriptions and testable mechanisms helped me recognize the potential value and impact of qualitative research.

Types of Qualitative Research Questions

Qualitative research is useful for addressing two main types of questions: descriptive and mechanistic. Descriptive questions ask what is happening, for whom, and in what circumstances. Mechanistic questions ask how a phenomenon of interest is happening. Here we explain each type of question and highlight some example studies conducted in the context of undergraduate research.

Descriptive Questions

Descriptive research seeks to elucidate details that enhance our overall understanding of a particular phenomenon—it answers questions about what a phenomenon is, including its defining features (i.e., dimensions) and what makes it distinct from other phenomena (Loeb et al. 2017). Descriptive research can also reveal who experiences the phenomenon, as well as when and where a phenomenon occurs (Loeb et al. 2017). Details like these serve as a starting point for future research, policy development, and enhanced practice. For instance, Hunter, Laursen, and Seymour (2007) carried out a qualitative study that identified and described the benefits of undergraduate research from the perspectives of both students and faculty. This work prompted calls for expansion of undergraduate research nationally and led to numerous quantitative studies (Gentile, Brenner, and Stephens 2017). Among these were quantitative studies from our group on the influences of research mentors on undergraduate researchers (Aikens et al. 2016, 2017; Joshi, Aikens, and Dolan 2019). Although these studies were framed to identify beneficial outcomes, we observed that undergraduates who had less favorable experiences with mentors were opting not to participate in our studies. Given this observation and the dearth of research on negative experiences in undergraduate research, we carried out a descriptive qualitative study of the dimensions (i.e., the what) of negative mentoring—that is, problematic or ineffective mentoring—in undergraduate life science research (Limeri et al. 2019). This study revealed that negative mentoring in undergraduate research included the absence of support from mentors and actively harmful mentor behaviors. These results served as the basis for practical guidance on how to curtail negative mentoring and its effects and for ongoing quantitative research. We use this study as the basis for the extended examples highlighted in Table 1.

Descriptive research is also suited to investigating the experiences of groups that are marginalized or minoritized in higher education. These studies offer insights into student experiences that may be otherwise overlooked or masked in larger quantitative studies (Vaccaro et al. 2015). For example, descriptive qualitative research shed light on how Black women in undergraduate and graduate STEM programs recognized and responded to structural racism, sexism, and race-gender bias. This research identified how high-achieving Black STEM students experienced racial battle fatigue and offered program-level suggestions for how to better support Black students (McGee and Bentley 2017). Descriptive qualitative research on deaf students involved in undergraduate research revealed that research mentors’ lack of awareness of Deaf culture, as well as a lack of communication, hindered students’ research experiences (Majocha et al. 2018). This research led to recommendations for research programs, research mentors, and students themselves. Another descriptive qualitative study showed how Latine students’ science identity changed over time when they were involved in an undergraduate research program (Vasquez-Salgado et al. 2023). Specifically, Vasquez-Salgado and colleagues identified patterns in students’ science identity through three waves of data collection spanning 18 months. Students’ identities showed consistent or fast achievement of feeling like a scientist, gradual achievement of feeling like a scientist, achievement adjustment (feeling like a scientist at one point in the program and less so later), or never feeling like a scientist. Together, these and other studies have generated knowledge that raises questions for future research and informs our collective efforts to make undergraduate research more accessible and inclusive.

Mechanistic Questions

Mechanistic qualitative research aims to address questions of how or why a phenomenon occurs. In the context of undergraduate research, an investigator may seek to understand how or why a particular practice or program design affects students. Recently, we conducted a mechanistic qualitative study that aimed, in part, to understand how early career researchers (undergraduate, postbaccalaureate, and graduate students) conceptualized their science identity (Pfeifer et al. 2023). Previous research theorized that someone is more likely to identify as a scientist if they are interested in science, believe they are competent in and can perform science, and feel recognized by others for their scientific aptitude or accomplishments (Carlone and Johnson 2007; Hazari et al. 2010; Potvin and Hazari 2013). However, this theory is somewhat limited in that it does not fully explain how context affects science identity or how science identity evolves, especially as researchers advance in their scientific training (Hazari et al. 2020; Kim and Sinatra 2018). To address this, we integrated science identity theory with research on professional identity development to design our study (Pratt, Rockmann, and Kaufmann 2006). We analyzed data from two national samples, including open-ended survey responses from 548 undergraduates engaged in research training and interview data from 30 early career researchers in the natural sciences. We found that they conceptualized science identity as a continuum that encompassed being a science student, being a science researcher, and being a career researcher. How students saw their science identity depended on how they viewed the purpose of their daily research, the level of intellectual responsibility they held for their research, and the extent of their autonomy in their research. We consider these findings to be hypotheses that can be tested quantitatively to better understand science identity dynamics in research training contexts. By asking this mechanistic question about science identity, we sought to add to and refine existing theory.

Key Attributes of Qualitative Research

For any type of research to be meaningful, it must possess some degree of rigor—what qualitative researchers call trustworthiness (Morse et al. 2002; Yilmaz 2013). Qualitative research is more trustworthy if it is characterized by credibility, transferability, dependability, and confirmability (Creswell and Poth 2016; Lincoln and Guba 1985). For instance, like accuracy and precision in quantitative research, do qualitative findings reflect what is being studied and are the interpretations true to the data (credibility)? Similar to reproducibility in quantitative research, how can qualitative research findings be applied to similar contexts (transferability)? Like validity in quantitative research, to what degree are the framing, methods, and findings of qualitative research appropriate given the aims (dependability)? Similar to the idea of replicability in quantitative research, if the same analytic tools were applied to the same data set could similar findings be reached by someone outside the original research team (confirmability)? The exact dimensions of trustworthiness, how trustworthiness manifests in the research process, the best ways to achieve trustworthiness, and how to talk about trustworthiness in research products are the subject of ongoing and often-spirited debate (e.g., Gioia et al. 2022; Mays and Pope 2020; Morse et al. 2002; Ritchie et al. 2013; Tracy 2010; Welch 2018; Yadav 2022). Central to these dialogues is the fact that qualitative research is composed of different philosophical approaches that emerged and evolved from diverse social science fields (Creswell and Poth 2016; Ritchie et al. 2013). Identifying universally agreed-upon criteria and the means to achieve these criteria is complex.

In our own work, we have found Tracy’s (2010) eight criteria for excellent qualitative research particularly useful. These criteria have helped us design studies, make decisions during the course of research, and articulate in our papers how our research seeks to achieve trustworthiness (e.g., Pfeifer, Cordero, and Stanton 2023). The full list of criteria is: worthy topic, rich rigor, sincerity, credibility, resonance, significant contribution, ethical conduct, and meaningful coherence (Tracy 2010). These criteria borrow from and build on the concepts of credibility, transferability, dependability, and confirmability presented above, and they are described in a way that makes sense to us and fits our approach to research. Here we highlight two criteria that may be particularly relevant if you are new to qualitative research.

Worthy Topics

As scholars familiar with undergraduate research and scholarly inquiry, SPUR readers are well-positioned to design studies that address research questions that are significant and timely in the context of undergraduate research. The first step in doing qualitative research (or any research) is to figure out what you want to study. You’ll want to select a topic that you find interesting, relevant, or otherwise compelling so you are motivated to spend time and effort investigating it. One way to find a topic is to notice what is happening in your environment and your work. What are you observing about undergraduate research? Something about students who participate (or not)? Something about colleagues who work with undergraduate researchers (or not)? Something about the design, implementation, or outcomes of the research experience? Something about the programmatic or institutional context? For a topic to be worthy of research, it should be interesting to you and to others. Consider sharing your observations with a few critical friends (i.e., trusted colleagues who will give you honest feedback) about whether they find your observations interesting or worth your time and energy to explore.

Ethics

Like other research with human participants, qualitative studies must adhere to the basic ethical principles of respect for persons, beneficence, and justice (National Commission for the Protection of Human Subjects 1978). Respect for persons means treating all people as autonomous and protecting individuals with diminished autonomy (e.g., students whom we teach and assess). Beneficence involves treating people in an ethical manner, including respecting their decisions, protecting them from harm, and securing their well-being. Justice refers to the balance between benefiting from research and bearing its burdens; in other words, people should be able to benefit from research and should not be expected to bear its burden if they cannot benefit. Although it is beyond the scope of this essay to provide guidance on how to adhere to these principles, it is important to recognize that qualitative methods like interviewing can be highly personal and sometimes powerful experiences for both participants and researchers. Investigators should carefully consider how their participants may be affected by data collection. For example, you may interview or survey participants about a personally difficult or painful experience. Do you then bear responsibility for helping them find support to navigate these difficulties? What if a participant reveals to you a serious mental health issue or physical safety concern? These situations occurred during our negative mentoring studies. We provided information to participants about where they could seek counseling or support for specific issues that can occur with mentors, such as harassment and discrimination.

Certainly not all qualitative data collection brings up these issues, but it can and does happen more frequently than you might expect. Your institutional review board (IRB), collaborators, and critical friends can be helpful resources when planning for and navigating tough scenarios like these. If working with an IRB is new to you, we recommend finding colleagues at your institution who have conducted IRB-reviewed research and asking them for guidance and examples. Some IRBs offer training for individuals new to developing human research protocols, and there are likely to be templates for everything from recruitment letters to consent forms to study information. We have found that the process of developing IRB protocols helps refine research questions and study plans. Furthermore, IRB review is needed before you collect data that will be used for your study; IRBs rarely, if ever, allow for retrospective review and approval. In our experience, studies like these are likely to be determined exempt from IRB review because they involve minimal risk and use standard educational research procedures. However, the IRB is still responsible for making this determination and is a valuable partner for helping investigators navigate sensitive or complex situations that occur in human research.

Getting Started with Qualitative Research

Now that you have a sense of the purposes of qualitative research and what features help to ensure its quality, you are probably wondering how to do it. We want to emphasize that there are entire programs of study, whole courses, and lengthy texts that aim to teach qualitative research. We cannot come close to describing what can be learned from these more substantial resources. With this in mind, we share our own process of carrying out qualitative research as an example that others might find helpful to follow. We outline this “how to” as a series of steps, but qualitative research (like all research) is iterative and dynamic (University of California Museum of Paleontology 2022). Feel free to read through the steps in a linear fashion, but expect to move among them in non-linear ways as your own work unfolds. Extended discussion of each of these steps, with examples from our research on negative mentoring, is provided in Table 1 along with an abridged list of our go-to references.

Observe, Search, and Read

For a topic to be worthy of qualitative research (or any research), it should also have the potential to address a knowledge gap. After we identify a “worthy topic,” we try to find as much information about that topic as possible (Dolan 2013). We read, then we keep reading, and then we read some more. This may seem obvious, but we find that investing time reading literature can save us a lot of time designing, conducting, and writing up a study on a phenomenon that is already well known or understood by others and just not (yet) by us. To help us in our searching, we will sometimes reach out to colleagues in related fields to describe the phenomenon we are interested in studying and see if they have terms that they use to describe the phenomenon or theories they think are related. Theory informs our research questions, study designs, analytic approaches, and interpretation and reporting of findings, and enables alignment among all of these elements of research (e.g., Grant and Osanloo 2014; Luft et al. 2022; Spangler and Williams 2019). Theory also serves as a touchstone for connecting our findings to larger bodies of knowledge and communicating these connections in a way that promotes collective understanding of whatever we are investigating.

Formulate a Question

Once you have selected a topic and identified a knowledge gap, consider research questions that, if answered, would address the knowledge gap. Recall that qualitative research is suited to questions that require a descriptive (what) or mechanistic (how or why) answer.

Decide on a Study Design

Just like quantitative research, qualitative research has characteristic approaches, designs, and methodologies, each of which has affordances and constraints (Creswell and Poth 2016; Merriam 2014; Miles, Huberman, and Saldana 2014). Creswell and Poth provide a valuable resource for learning more about different types of qualitative research study designs, including which designs are suited to address which kinds of research questions. Given the labor intensiveness of qualitative data collection and analysis, it is critical to think carefully about how to recruit and select study participants. What this looks like and who might be appropriate study participants will depend on many factors, including the knowledge gap, research question, study design, and methods. Questions that can be helpful to ask are: Who do I need to study to answer my research question? What should the study participants have in common? In what ways should study participants vary to provide rich, complex, and varied insight into what I am studying? To whom do I want to generalize my findings, keeping in mind the qualitative nature of the work?

Based on the answers to these questions, you may opt for purposeful sampling, in which you collect data only from participants who meet the characteristics you decide upon given the aims of your study. In this case, you will likely send a screening survey to potential participants to determine whether they have the characteristics of interest, which will help you decide whether to invite them for further data collection. A purposeful sample contrasts with a convenience sample, in which essentially any person who agrees to participate in the study will be selected for further data collection.
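To illustrate the logic of purposeful sampling, the sketch below filters hypothetical screening-survey responses against inclusion criteria. All field names, criteria, and participants are invented for illustration; they are not drawn from our studies.

```python
# Hypothetical screening-survey responses; field names are invented for illustration.
responses = [
    {"name": "P1", "role": "undergraduate", "semesters_of_research": 3},
    {"name": "P2", "role": "graduate", "semesters_of_research": 1},
    {"name": "P3", "role": "undergraduate", "semesters_of_research": 0},
    {"name": "P4", "role": "undergraduate", "semesters_of_research": 2},
]

def purposeful_sample(responses, min_semesters=2):
    """Keep only respondents who meet the (hypothetical) inclusion criteria:
    undergraduates with at least `min_semesters` of research experience."""
    return [r for r in responses
            if r["role"] == "undergraduate"
            and r["semesters_of_research"] >= min_semesters]

invitees = purposeful_sample(responses)
print([r["name"] for r in invitees])  # → ['P1', 'P4']
```

A convenience sample, by contrast, would skip the filter entirely and invite anyone who agreed to participate.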

Collect and Analyze Data Systematically

Qualitative data can be collected in a variety of ways, including surveys, interviews, and focus groups, as well as audio and video recordings of learning experiences such as class sessions. To decide which method(s) to use for data collection, it is helpful to consider what you aim to learn from study participants. Surveys tend to be easier to distribute to a larger sample, but may elicit shorter or shallower responses, which are challenging to interpret because there is less information (i.e., words) and no opportunity to clarify with participants. Focus groups can be effective for quickly gathering input from a group of participants. However, social dynamics may result in one or a few people dominating the discussion, or in “group think,” in which people agree with one another rather than providing their own unique perspectives. Interviews with individuals can be a rich and varied data source because each participant has time and space to offer their own distinct perspective. Interviews also allow for follow-up questions, which are difficult to pose through survey methods. Yet conducting interviews skillfully—avoiding leading questions and ensuring that the line of questioning yields the desired data—takes a lot of thought and practice. Kvale (1996) offers detailed guidance on how to design and carry out research interviews. Observing an expert interviewer, and having them observe and give feedback as you interview, can help improve your skills. Audio and video recordings of learning experiences like class sessions or group work can provide a wealth of information (e.g., verbal and nonverbal exchanges among students or between students and instructors) in a more natural setting than surveys or interviews. Yet deciding what information will serve as data to answer your research question, and how that large body of data will be systematically analyzed, can be cumbersome.

Regardless of the data collection method, you’ll need to decide how much data to collect. There is no one right sample size. A good rule of thumb is to collect data until you reach “saturation,” the point at which the same ideas come up repeatedly and no new ideas emerge during data collection. This means that your data collection and analysis are likely to overlap in time, with some data collection, then some analysis, and then more data collection.
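One informal way to gauge saturation is to track how many previously unseen codes each successive interview contributes; a run of zeros suggests that few new ideas are emerging. This bookkeeping is our illustration rather than a prescribed method, and the code labels below are invented.

```python
# Invented codes identified in each successive interview.
codes_per_interview = [
    {"mentor_support", "time_pressure"},
    {"time_pressure", "belonging"},
    {"belonging", "mentor_support"},
    {"mentor_support"},
]

def new_codes_per_interview(codes_per_interview):
    """For each interview, count the codes not seen in any earlier interview.
    A run of zeros near the end suggests saturation may be approaching."""
    seen, counts = set(), []
    for codes in codes_per_interview:
        counts.append(len(codes - seen))
        seen |= codes
    return counts

print(new_codes_per_interview(codes_per_interview))  # → [2, 1, 0, 0]
```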

Analytic methods in qualitative research vary widely in their interpretive complexity. As natural scientists, we favor sticking close to the data and analyzing using a method called qualitative content analysis. Content analysis involves taking quotes or segments of text and capturing their meaning with short words or phrases called codes. The process of developing codes and systematically applying them to a dataset is called coding. Coding is highly iterative and time-consuming because it typically requires multiple, careful passes through the dataset to ensure all codes have been applied consistently to all data. In a recent study, we spent 10 to 15 person-hours coding a single interview, and about 400 person-hours to complete coding for a 30-participant study. The time involved in coding depends on what is being studied, the type of coding, and who is coding the data. Saldaña (2016) provides excellent guidance on the coding process, including various ways of making sense of codes by grouping them into themes. Content analysis is just one approach to qualitative data analysis. We encourage you to learn more about different forms of qualitative analysis and choose what works best for your skill level, research goals, and data (e.g., Creswell and Poth 2016; Starks and Brown Trinidad 2007).
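At its simplest, the coding described above can be represented as a mapping from text segments to one or more codes, with codes later grouped into broader themes by the analyst. The quotes, codes, and themes below are invented for illustration and are not data from our studies.

```python
from collections import defaultdict

# Invented interview excerpts with codes applied to each segment.
coded_data = [
    {"quote": "My mentor never explained the protocol.", "codes": ["absenteeism"]},
    {"quote": "I felt like just an extra pair of hands.", "codes": ["unequal_treatment"]},
    {"quote": "She was often away at conferences.", "codes": ["absenteeism"]},
]

# Grouping codes into themes reflects the analyst's judgment; it is not computed.
themes = {"mentor_absence": ["absenteeism"], "devaluation": ["unequal_treatment"]}

def quotes_by_theme(coded_data, themes):
    """Collect the quotes whose codes fall under each theme."""
    out = defaultdict(list)
    for seg in coded_data:
        for theme, codes in themes.items():
            if any(c in codes for c in seg["codes"]):
                out[theme].append(seg["quote"])
    return dict(out)

result = quotes_by_theme(coded_data, themes)
print(len(result["mentor_absence"]))  # → 2
```

The interpretive work of content analysis lies in writing code definitions and deciding how codes cluster into themes; the data structure only keeps that work organized.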

Interpret and Write Results

There are many ways to effectively write up results, often called findings, from qualitative research. Because qualitative research involves extensive interpretation, it can sometimes be easier to integrate the results and discussion of a qualitative paper. Integration allows the interpretation (discussion) to be directly supported by the evidence in the form of quotations (results). The conclusions of the paper should avoid repeating the results and instead comment on the implications and applications of the findings: why they matter and what to do as a result. Because qualitative data are quotations rather than numbers, qualitative papers tend to be longer than papers presenting quantitative studies. That said, qualitative papers should still aim to be succinct. For instance, depending on the approach and methods, quotations can be lightly edited to remove extra words or filler words (e.g., um, uh) that are a natural part of speech but otherwise irrelevant to the findings. Presenting only the most pertinent part of a quotation not only facilitates succinctness but also helps readers attend to the specific evidence that supports the claims being made. Another strategy to shorten qualitative papers is to present some findings in supplemental materials.

Final Recommendations

In closing our article, we offer some advice that we wish we had known when we began conducting qualitative research. We hope that these recommendations will help you think through issues that are likely to emerge as you delve deeper into qualitative analysis, both as a producer and a consumer of qualitative research.

Consensus Coding in Qualitative Analysis

In qualitative analysis, we work to ensure that the analysis yields trustworthy findings by coding to consensus, meaning that the analytic team reaches 100 percent agreement on the application of each code to the data. Any disagreement between coders is discussed until a resolution is reached. In some cases, these discussions may result in a code description being redefined. Redefinition of a code requires that all data previously coded using the original code be reanalyzed to ensure fit with the revised definition. As you might imagine, coding to consensus can be time-consuming. Yet, in our experience, the time invested in coding to consensus is well spent because the analysis yields deeper insights about the data and the phenomenon being investigated. We also see coding to consensus as a great way to take advantage of the diverse viewpoints that team members bring to our research. By coding to consensus, we consider multiple interpretations of the data throughout the analysis process. We are well-positioned to develop theory (as appropriate for our study design) as a team because we all have engaged in meaningful conversations about our findings throughout analysis.
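The bookkeeping behind coding to consensus can be sketched as comparing two coders' code assignments segment by segment and flagging every disagreement for discussion. The segment IDs and codes here are hypothetical; in practice, the valuable part is the conversation each flagged segment triggers.

```python
def find_disagreements(coder_a, coder_b):
    """Return segment IDs where the two coders' code sets differ.
    Each argument maps a segment ID to the set of codes that coder applied."""
    return sorted(seg for seg in coder_a
                  if coder_a[seg] != coder_b.get(seg, set()))

# Hypothetical code assignments by two members of an analytic team.
coder_a = {"seg1": {"support"}, "seg2": {"pressure"}, "seg3": {"belonging"}}
coder_b = {"seg1": {"support"}, "seg2": {"pressure", "support"}, "seg3": {"belonging"}}

# Segments the team must discuss until 100 percent agreement is reached.
print(find_disagreements(coder_a, coder_b))  # → ['seg2']
```

If a discussion leads the team to redefine a code, every segment previously tagged with that code goes back through this comparison under the revised definition.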

Some qualitative research relies on a calculated measure of intercoder reliability (ICR) instead of coding to consensus. ICR values indicate how often a set of coders agree on the application of a code in the dataset. This quantification of coding is tempting because we love numbers, yet it can also be problematic (O’Connor and Joffe 2020). For instance, aiming for high ICR can create situations in which coders are pressured to agree with each other rather than bringing their own unique perspectives to the coding process (e.g., Belur et al. 2018; Morse 1997). Quantifying qualitative work can also imply a false precision in the analysis. In some research, ICR is calculated partway through the analysis to determine whether an “acceptable” level of agreement has been reached, at which point the remainder of the data are coded by just one researcher. This approach of using ICR as a cut-off runs counter to what many argue is the value of qualitative research: generating new theoretical understandings informed by multiple perspectives.
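For contrast, one widely used ICR statistic is Cohen's kappa, which adjusts raw percent agreement for the agreement expected by chance. This minimal sketch assumes two coders who each assign exactly one code per segment, a simplification of real coding, where multiple codes per segment are common; the code labels are invented.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two coders assigning one code per segment."""
    n = len(labels_a)
    # Raw proportion of segments on which the coders agree.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Agreement expected by chance, from each coder's label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

a = ["support", "pressure", "support", "belonging"]
b = ["support", "support", "support", "belonging"]
print(round(cohens_kappa(a, b), 2))  # → 0.56
```

Note what the single number hides: it says nothing about which segments diverged or why, which is exactly the information that consensus discussions surface.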

Using Numbers in Qualitative Analysis

Although numbers certainly have a place in qualitative analysis (Sandelowski 2001), we encourage researchers to move beyond word clouds or frequency counts of codes and themes in their results, for two reasons. First, a code or theme that is infrequently observed in the dataset can still be important to the phenomenon being studied. As an analogy, consider making qualitative observations of living cells under a typical light microscope. We would most frequently see relatively stationary cells, punctuated by the comparatively rare cell division, or mitosis. If we reported only the stationary observations in our findings, we would overlook mitosis, one of the most dynamic and fundamental processes that cells display. Second, given limited sample sizes, it may be that a unique and important code or theme is reported by only one participant in the dataset. In fact, rare observations can serve as “a-ha moments” that lead to a more comprehensive understanding of the phenomenon under investigation. These rare observations may also inspire new studies about topics that were not initially anticipated; this speaks to the value of qualitative research.

Closing Thoughts

We encourage readers to continue to learn about qualitative research, as there is much that could not be addressed in a single article. For instance, we did not introduce how philosophical stances, such as how someone views the nature of truth or what counts as evidence, influence the research process (Creswell and Poth 2016). For now, we will close with one final piece of advice. We both became better qualitative researchers by working with mentors and collaborators who have this expertise. We encourage you to find colleagues in your networks or at your institutions who may be interested in serving as a collaborator, mentor, or critical friend. The complexity of students and their experiences lends itself to qualitative approaches. We hope this article might serve as an impetus for you to learn more about qualitative research and even start your own investigations.

Data Availability Statement

The data included in this commentary have been published in an open-access journal under a Creative Commons license. Citations are included in the text.

Institutional Review Board Statement

Not applicable.

Conflict of Interest Statement

The authors have no conflicts of interest to report.

Acknowledgments

This material is based upon work supported by the National Science Foundation under award number OCE-2019589. This is the National Science Foundation’s Center for Chemical Currencies of a Microbial Planet (C-Comp) publication #026. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. We thank Patricia Mabrouk for inviting us to contribute this commentary. We thank members of the Biology Education Research Group at the University of Georgia and Daniel Dries, Joseph Provost, and Verónica Segarra for their thoughtful feedback on manuscript drafts.

References

Agee, Jane. 2009. “Developing Qualitative Research Questions: A Reflective Process.” International Journal of Qualitative Studies in Education 22: 431–447. doi: 10.1080/09518390902736512

Aikens, Melissa L., Melissa M. Robertson, Sona Sadselia, Keiana Watkins, Mara Evans, Christopher R. Runyon, Lillian T. Eby, and Erin L. Dolan. 2017. “Race and Gender Differences in Undergraduate Research Mentoring Structures and Research Outcomes.” CBE—Life Sciences Education 16(2): ar34. doi:10.1187/cbe.16-07-0211

Aikens, Melissa L., Sona Sadselia, Keiana Watkins, Mara Evans, Lillian T. Eby, and Erin L. Dolan. 2016. “A Social Capital Perspective on the Mentoring of Undergraduate Life Science Researchers: An Empirical Study of Undergraduate–Postgraduate–Faculty Triads.” CBE—Life Sciences Education 15(2): ar16. doi: 10.1187/cbe.15-10-0208

Anfara, Vincent A., Kathleen M. Brown, and Terri L. Mangione. 2002. “Qualitative Analysis on Stage: Making the Research Process More Public.” Educational Researcher 31(7): 28–38. doi:10.3102/0013189X031007028

Belur, Jyoti, Lisa Tompson, Amy Thornton, and Miranda Simon. 2018. “Interrater Reliability in Systematic Review Methodology: Exploring Variation in Coder Decision-Making.” Sociological Methods & Research 50: 837–865. doi:10.1177/0049124118799372

Carlone, Heidi B., and Angela Johnson. 2007. “Understanding the Science Experiences of Successful Women of Color: Science Identity as an Analytic Lens.” Journal of Research in Science Teaching 44: 1187–1218. doi: 10.1002/tea.20237

Castillo-Montoya, Milagros. 2016. “Preparing for Interview Research: The Interview Protocol Refinement Framework.” Qualitative Report 21: 811–831. doi: 10.46743/2160-3715/2016.2337

Charmaz, Kathy. 2006. Constructing Grounded Theory: A Practical Guide through Qualitative Analysis. London: Sage.

Creswell, John W., and Cheryl N. Poth. 2016. Qualitative Inquiry and Research Design: Choosing among Five Approaches. Sage.

Dolan, Erin L. 2013. “Biology Education Scholarship.” IBiology. https://www.ibiology.org/career-exploration/biology-educationscholarship

Gentile, Jim, Kerry Brenner, and Amy Stephens, eds. 2017. Undergraduate Research Experiences for STEM Students: Successes, Challenges, and Opportunities. Washington, DC: National Academies Press. https://www.nap.edu/catalog/24622/undergraduate-research-experiences-for-stem-students-successes-challenges-and-opportunities

Gioia, Denny, Kevin Corley, Kathleen Eisenhardt, Martha Feldman, Ann Langley, Jane Lê, Karen Golden-Biddle, et al. 2022. “A Curated Debate: On Using ‘Templates’ in Qualitative Research.” Journal of Management Inquiry 31: 231–52. doi:10.1177/10564926221098955

Goldberg, Abbie E., and Katherine R. Allen. 2015. “Communicating Qualitative Research: Some Practical Guideposts for Scholars.” Journal of Marriage and Family 77 (1): 3–22. doi:10.1111/jomf.12153

Grant, Cynthia, and Azadeh Osanloo. 2014. “Understanding, Selecting, and Integrating a Theoretical Framework in Dissertation Research: Creating the Blueprint for Your ‘House.’” Administrative Issues Journal 4(2): 4. https://dc.swosu.edu/aij/vol4/iss2/4

Hampton, Cynthia, David Reeping, and Desen Sevi Ozkan. 2021. “Positionality Statements in Engineering Education Research: A Look at the Hand That Guides the Methodological Tools.” Studies in Engineering Education 1(2): 126–141. doi: 10.21061/see.13

Hazari, Zahra, Deepa Chari, Geoff Potvin, and Eric Brewe. 2020. “The Context Dependence of Physics Identity: Examining the Role of Performance/Competence, Recognition, Interest, and Sense of Belonging for Lower and Upper Female Physics Undergraduates.” Journal of Research in Science Teaching 57:1583–1607. doi: 10.1002/tea.21644

Hazari, Zahra, Gerhard Sonnert, Philip M. Sadler, and Marie-Claire Shanahan. 2010. “Connecting High School Physics Experiences, Outcome Expectations, Physics Identity, and Physics Career Choice: A Gender Study.” Journal of Research in Science Teaching 47: 978–1003. doi: 10.1002/tea.20363

Holmes, Andrew, and Gary Darwin. 2020. “Researcher Positionality: A Consideration of Its Influence and Place in Qualitative Research; A New Researcher Guide.” Shanlax International Journal of Education 8(4): 1–10. doi: 10.34293/education.v8i4.3232

Hunter, Anne-Barrie, Sandra L. Laursen, and Elaine Seymour. 2007. “Becoming a Scientist: The Role of Undergraduate Research in Students’ Cognitive, Personal, and Professional Development.” Science Education 91: 36–74. doi: 10.1002/sce.20173

Joshi, Megha, Melissa L. Aikens, and Erin L. Dolan. 2019. “Direct Ties to a Faculty Mentor Related to Positive Outcomes for Undergraduate Researchers.” BioScience 69: 389–397. doi: 10.1093/biosci/biz039

Kim, Ann Y., and Gale M. Sinatra. 2018. “Science Identity Development: An Interactionist Approach.” International Journal of STEM Education 5: 51. doi: 10.1186/s40594-018-0149-9

Knott, Eleanor, Aliya Hamid Rao, Kate Summers, and Chana Teeger. 2022. “Interviews in the Social Sciences.” Nature Reviews Methods Primers 2: 73. doi: 10.1038/s43586-022-00150-6

Korstjens, Irene, and Albine Moser. 2017. “Series: Practical Guidance to Qualitative Research. Part 2: Context, Research Questions and Designs.” European Journal of General Practice 23: 274–279. doi: 10.1080/13814788.2017.1375090

Kvale, Steinar. 1996. InterViews: An Introduction to Qualitative Research Interviewing. Thousand Oaks, CA: Sage.

Kyngäs, Helvi, Kristina Mikkonen, and Maria Kääriäinen, eds. 2020. The Application of Content Analysis in Nursing Science Research. Cham: Springer International. doi: 10.1007/978-3-030-30199-6

Limeri, Lisa B., Muhammad Zaka Asif, Benjamin H. T. Bridges, David Esparza, Trevor T. Tuma, Daquan Sanders, Alexander J. Morrison, Pallavi Rao, Joseph A. Harsh, and Adam V. Maltese. 2019. “‘Where’s My Mentor?!’ Characterizing Negative Mentoring Experiences in Undergraduate Life Science Research.” CBE—Life Sciences Education 18(4): ar61. doi: 10.1187/cbe.19-02-0036

Lincoln, Yvonna S., and Egon G. Guba. 1985. Naturalistic Inquiry. Sage.

Loeb, Susanna, Susan Dynarski, Daniel McFarland, Pamela Morris, Sean Reardon, and Sarah Reber. 2017. “Descriptive Analysis in Education: A Guide for Researchers.” NCEE 2017-4023. National Center for Education Evaluation and Regional Assistance.

Luft, Julie A., Sophia Jeong, Robert Idsardi, and Grant Gardner. 2022. “Literature Reviews, Theoretical Frameworks, and Conceptual Frameworks: An Introduction for New Biology Education Researchers.” CBE—Life Sciences Education 21(3): rm33. doi: 10.1187/cbe.21-05-0134

Majocha, Megan, Zachary Davenport, Derek C. Braun, and Cara Gormally. 2018. “‘Everyone Was Nice . . . But I Was Still Left Out’: An Interview Study about Deaf Interns’ Research Experiences in STEM.” Journal of Microbiology & Biology Education 19(1): 19.1.10. doi: 10.1128/jmbe.v19i1.1381

Mays, Nicholas, and Catherine Pope. 2020. “Quality in Qualitative Research.” In Qualitative Research in Health Care, ed. Catherine Pope and Nicholas Mays, 211–233. doi:10.1002/9781119410867.ch15

McGee, Ebony O., and Lydia Bentley. 2017. “The Troubled Success of Black Women in STEM.” Cognition and Instruction 35: 265–289. doi: 10.1080/07370008.2017.1355211

Merriam, Sharan B. 2014. Qualitative Research: A Guide to Design and Implementation. San Francisco: Wiley.

Miles, Matthew B., A. Michael Huberman, and Johnny Saldana. 2014. Qualitative Data Analysis: A Methods Sourcebook. 3rd ed. Thousand Oaks, CA: Sage.

Morse, Janice M. 1997. “‘Perfectly Healthy, but Dead’: The Myth of Inter-Rater Reliability.” Qualitative Health Research 7:445–47. doi: 10.1177/104973239700700401

Morse, Janice M., Michael Barrett, Maria Mayan, Karin Olson, and Jude Spiers. 2002. “Verification Strategies for Establishing Reliability and Validity in Qualitative Research.” International Journal of Qualitative Methods 1(2): 13–22. doi: 10.1177/160940690200100202

National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. 1978. “The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research.” Bethesda, MD: National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. https://repository.library.georgetown.edu/handle/10822/779133

O’Connor, Cliodhna, and Helene Joffe. 2020. “Intercoder Reliability in Qualitative Research: Debates and Practical Guidelines.” International Journal of Qualitative Methods 19:1609406919899220. doi: 10.1177/1609406919899220

Pfeifer, Mariel A., Julio J. Cordero, and Julie Dangremond Stanton. 2023. “What I Wish My Instructor Knew: How Active Learning Influences the Classroom Experiences and Self-Advocacy of STEM Majors with ADHD and Specific Learning Disabilities.” CBE—Life Sciences Education 22(1): ar2. doi: 10.1187/cbe.21-12-0329

Pfeifer, Mariel A., C. J. Zajic, Jared M. Isaacs, Olivia A. Erickson, and Erin L. Dolan. 2023. “Beyond Performance, Competence, and Recognition: Forging a Science Researcher Identity in the Context of Research Training.” BioRxiv 2023.03.22.533783. doi: 10.1101/2023.03.22.533783

Potvin, Geoff, and Zahra Hazari. 2013. “The Development and Measurement of Identity across the Physical Sciences.” 2013 PERC Proceedings. American Association of Physics Teachers. https://www.compadre.org/Repository/document/ServeFile.cfm?ID=13182&DocID=3729

Pratt, Michael G., Kevin W. Rockmann, and Jeffrey B. Kaufmann. 2006. “Constructing Professional Identity: The Role of Work and Identity Learning Cycles in the Customization of Identity among Medical Residents.” Academy of Management Journal 49: 235–262. doi: 10.5465/AMJ.2006.20786060

Ritchie, Jane, Jane Lewis, Carol McNaughton Nicholls, and Rachel Ormston. 2013. Qualitative Research Practice: A Guide for Social Science Students and Researchers. Sage.

Roulston, Kathryn, Kathleen deMarrais, and Jamie B. Lewis. 2003. “Learning to Interview in the Social Sciences.” Qualitative Inquiry 9: 643–668. doi: 10.1177/1077800403252736

Saldaña, Johnny. 2016. The Coding Manual for Qualitative Researchers. 3rd ed. Los Angeles: Sage.

Sandelowski, Margarete. 1995. “Qualitative Analysis: What It Is and How to Begin.” Research in Nursing & Health 18: 371–375. doi: 10.1002/nur.4770180411

Sandelowski, Margarete. 1998. “Writing a Good Read: Strategies for Re-Presenting Qualitative Data.” Research in Nursing & Health 21: 375–382. doi: 10.1002/(SICI)1098-240X(199808)21:4<375::AID-NUR9>3.0.CO;2-C

Sandelowski, Margarete. 2001. “Real Qualitative Researchers Do Not Count: The Use of Numbers in Qualitative Research.” Research in Nursing & Health 24: 230–240. doi: 10.1002/nur.1025

Spangler, Denise A., and Steven R. Williams. 2019. “The Role of Theoretical Frameworks in Mathematics Education Research.” In Designing, Conducting, and Publishing Quality Research in Mathematics Education, ed. Keith R. Leatham, 3–16. Research in Mathematics Education. Cham: Springer International. doi:10.1007/978-3-030-23505-5_1

Starks, Helene, and Susan Brown Trinidad. 2007. “Choose Your Method: A Comparison of Phenomenology, Discourse Analysis, and Grounded Theory.” Qualitative Health Research 17: 1372–1380. doi: 10.1177/1049732307307031

Tracy, Sarah J. 2010. “Qualitative Quality: Eight ‘Big-Tent’ Criteria for Excellent Qualitative Research.” Qualitative Inquiry 16:837–851. doi: 10.1177/1077800410383121

University of California Museum of Paleontology. 2022. “Understanding Science: Science Flowchart.” UC Museum of Paleontology Understanding Science. https://undsci.berkeley.edu/science-flowchart

Vaccaro, Annemarie, Ezekiel W. Kimball, Ryan S. Wells, and Benjamin J. Ostiguy. 2015. “Researching Students with Disabilities: The Importance of Critical Perspectives.” New Directions for Institutional Research 2014(163): 25–41. doi: 10.1002/ir.20084

Vasquez-Salgado, Yolanda, Tissyana C. Camacho, Isabel López, Gabriela Chavira, Carrie L. Saetermoe, and Crist Khachikian. 2023. “‘I Definitely Feel like a Scientist’: Exploring Science Identity Trajectories among Latinx Students in a Critical Race Theory–Informed Undergraduate Research Experience.” Infant and Child Development 32(3): e2371. doi: 10.1002/icd.2371

Watson, Rachel M., John D. Willford, and Mariel A. Pfeifer. 2018. “A Cultured Learning Environment: Implementing a Problem- and Service-Based Microbiology Capstone Course to Assess Process- and Skill-Based Learning Objectives.” Interdisciplinary Journal of Problem-Based Learning 12(1): article 8. doi: 10.7771/1541-5015.1694

Welch, Catherine. 2018. “Good Qualitative Research: Opening up the Debate.” In Collaborative Research Design: Working with Business for Meaningful Findings, 401–412. Singapore: Springer. doi: 10.1007/978-981-10-5008-4

Yadav, Drishti. 2022. “Criteria for Good Qualitative Research: A Comprehensive Review.” Asia-Pacific Education Researcher 31: 679–689. doi: 10.1007/s40299-021-00619-0

Yilmaz, Kaya. 2013. “Comparison of Quantitative and Qualitative Research Traditions: Epistemological, Theoretical, and Methodological Differences.” European Journal of Education 48: 311–325. doi: 10.1111/ejed.12014

Mariel A. Pfeifer

University of Georgia, mapfeife@olemiss.edu

Mariel A. Pfeifer is a postdoctoral researcher at the University of Georgia’s SPREE (Social Psychology of Research Experiences and Education) Lab. Her passion for biology education research was sparked by her experiences as an undergraduate teaching assistant, a pre-service science teacher, and a disability services coordinator. Pfeifer will soon begin a new role as an assistant professor of biology at the University of Mississippi.

Erin L. Dolan is a professor of biochemistry and molecular biology and Georgia Athletic Association Professor of Innovative Science Education at the University of Georgia. As a graduate student, Dolan volunteered in K–12 schools, which inspired her pursuit of a biology education career. She teaches introductory biology, and her research group, the SPREE Lab, works to delineate features of undergraduate and graduate research that influence students’ career decisions.

Step Up for SPUR

Publishing with SPUR: Start with a Great Research Question

Introduction – Fall 2023

Developing Prerequisite Skills in a CURE through Competency-Based Assignments

The Genomics Education Partnership: First Findings on Genomics Research in Community Colleges