Rick Gilmore is Professor of Psychology at The Pennsylvania State University. He studies the development of visual perception in infants, children, and adults using behavioral, neuroimaging, and computational methods. These are described more fully on his lab website. He co-founded and co-directs the Databrary.org data library, co-founded the Penn State Social, Life, & Engineering Sciences Imaging Center (SLEIC), directs the Open Data and Developmental Science (ODDS) initiative through the Penn State Child Study Center, and advocates for more open, transparent, and reproducible scientific practices, especially in psychology and neuroscience.
Ph.D., Psychology, 1997
Carnegie Mellon University
M.S., Psychology, 1995
Carnegie Mellon University
A.B. magna cum laude, Cognitive Science, 1985
Brown University
Sex differences in a variety of psychological characteristics are well-documented, with substantial research focused on factors that affect their magnitude and causes. Particular attention has focused on mental rotation, a measure of spatial cognition, and on activity interests. We studied whether sex differences in visual perception—luminance contrast thresholds and motion duration thresholds—contribute to sex differences in mental rotation and interest in male-typed activities. We confirmed sex differences in vision, mental rotation, and activity interests in a sample of 132 college students. In novel findings, we showed that vision correlated with mental rotation performance in women, that vision was a better predictor of individual differences in mental rotation than sex, and that contrast thresholds correlated with women’s interest in male-typed activities. These results suggest that sex differences in spatial cognition and activity interests may have their roots in basic perceptual processes.
Aided augmentative and alternative communication (AAC) displays are often designed as symmetrical row–column grids, with each square in the grid containing a symbol. To maximize the vocabulary on a display, symbols are often placed close to one another, and background color cuing is used to signal and differentiate symbols from different grammatical categories. However, from a visual and developmental standpoint, these display features (close-set symbols and background color cues) may not be optimal. In particular, placing symbols very close together may produce visual crowding, in which an individual symbol cannot be distinguished because of its many neighbors, or flankers. This research examined the effects of display arrangement and background color cuing on the efficiency of visual attention during search.
Video data are uniquely suited for research reuse and for documenting research methods and findings. However, curation of video data is a serious hurdle for researchers in the social and behavioral sciences, where behavioral video data are obtained session by session and data sharing is not the norm. To eliminate the onerous burden of post hoc curation at the time of publication (or later), we describe best practices in active data curation—where data are curated and uploaded immediately after each data collection session, allowing instantaneous sharing with one button press at any time. Indeed, we recommend that researchers adopt “hyperactive” data curation, in which they openly share every step of their research process. The necessary infrastructure and tools are provided by Databrary—a secure, web-based data library designed for active curation and sharing of personally identifiable video data and associated metadata. We provide a case study of hyperactive curation of video data from the Play and Learning Across a Year (PLAY) project, in which dozens of researchers developed a common protocol to collect, annotate, and actively curate video data of infants and mothers during natural activity in their homes at research sites across North America. PLAY relies on scalable, standardized workflows to facilitate collaborative research, assure data quality, and prepare the corpus for sharing and reuse throughout the entire research process.
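A minimal sketch of one possible "active curation" step: checking a session's metadata for completeness immediately after collection, before upload. The file name and field names below are hypothetical, not the actual PLAY or Databrary schema.

```r
# Hypothetical required fields for one session's metadata record;
# the real PLAY/Databrary schema is not reproduced here.
required_fields <- c("session_id", "participant_id", "test_date", "consent_level")

# Read the session's metadata file (hypothetical file name).
session <- read.csv("session_metadata.csv", stringsAsFactors = FALSE)

# Fail fast if any required field is missing, so problems are caught
# right after collection rather than at publication time.
missing <- setdiff(required_fields, names(session))
if (length(missing) > 0) {
  stop("Metadata incomplete; missing fields: ",
       paste(missing, collapse = ", "))
}
message("Session metadata complete; ready to upload.")
```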
Psychologists embrace the ethical imperative of protecting research participants from harm. We argue that sharing data should also be considered an ethical imperative. Despite potential risks to participants’ privacy and data confidentiality, sharing data confers benefits to participants and to the community at large by promoting scientific transparency, bolstering reproducibility, and fostering more efficient use of resources. Most of the risks to participants can be mitigated and the benefits of sharing realized through well-established but not yet widespread practices and tools. This chapter serves as a how-to manual for addressing ethical challenges in sharing human data in psychological research in ways that simultaneously protect participants and advance discovery.
In 2019, the Governing Council of the Society for Research in Child Development (SRCD) adopted a Policy on Scientific Integrity, Transparency, and Openness (SRCD, 2019a) and accompanying Author Guidelines on Scientific Integrity and Openness in Child Development (SRCD, 2019b). In this issue, a companion article (Gennetian, Tamis‐LeMonda, & Frank) discusses the opportunities to realize SRCD’s vision for a science of child development that is open, transparent, robust, and impactful. In this article, we discuss some of the challenges associated with realizing SRCD’s vision. In identifying these challenges—protecting participants and researchers from harm, respecting diversity, and balancing the benefits of change with the costs—we also offer constructive solutions.
Interdisciplinary exchange of ideas and tools can accelerate scientific progress. For example, findings from developmental and vision science have spurred recent advances in artificial intelligence and computer vision. However, relatively little attention has been paid to how artificial intelligence and computer vision can facilitate research in developmental science. The current study presents AutoViDev—an automatic video-analysis tool that uses machine learning and computer vision to support video-based developmental research. AutoViDev estimates full-body pose in real-time video streams using convolutional pose machine algorithms. AutoViDev provides valuable information about a variety of behaviors, including gaze direction, facial expressions, posture, locomotion, manual actions, and interactions with objects. We present the high-level architecture of the framework and describe two projects that demonstrate its usability. We discuss the benefits of applying AutoViDev to large-scale, shared video datasets and highlight how machine learning and computer vision can enhance and accelerate research in developmental science.
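As an illustrative sketch only (AutoViDev's actual interface is not described above), downstream analysis of per-frame pose output might look like this in R. The file name and column layout are invented for illustration.

```r
# Hypothetical per-frame pose output: one row per video frame, with
# (x, y) pixel coordinates for named body keypoints. These column
# names are invented and are not AutoViDev's actual schema.
pose <- read.csv("pose_keypoints.csv")

# Example derived measure: head tilt in degrees, computed from the
# vector between the left- and right-ear keypoints.
pose$head_tilt_deg <- atan2(pose$ear_right_y - pose$ear_left_y,
                            pose$ear_right_x - pose$ear_left_x) * 180 / pi

summary(pose$head_tilt_deg)
```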
I teach the following courses on a regular basis:
I have taught this course in the past:
I am teaching this course in Spring 2023:
I have also helped lead the following R workshops:
With several colleagues, I am planning an Open Science Bootcamp for August 9-11, 2023.
musings on this and that…
The bookdown package makes it easy to build reproducible research protocols.
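A minimal sketch, assuming a project directory containing an index.Rmd and one or more chapter .Rmd files:

```r
# install.packages("bookdown")  # one-time setup

# Render all of the project's .Rmd files into a single HTML book;
# output is written to _book/ by default.
bookdown::render_book("index.Rmd", output_format = "bookdown::gitbook")
```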
Open Science at Penn State
The Play & Learning Across a Year Project
An R package for interacting with the Databrary.org API; a minimal usage sketch follows this project list.
An open, secure data library for sharing video.
An open source tool for video annotation.
What is the emotional environment really like?
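Returning to the R package for the Databrary.org API listed above: here is a minimal sketch of the kind of request such a package wraps, using httr and jsonlite directly. The /api/volume/<id> route and the volume ID are assumptions for illustration; consult the Databrary API documentation for actual routes.

```r
library(httr)
library(jsonlite)

# Fetch metadata for a publicly shared Databrary volume.
# NOTE: the route and volume ID below are assumptions for
# illustration, not documented guarantees.
resp <- GET("https://nyu.databrary.org/api/volume/1")
stop_for_status(resp)

vol <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))
vol$name  # the volume's title, if the response includes one
```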