Rick Gilmore is Professor of Psychology at The Pennsylvania State University. He studies the development of visual perception in infants, children, and adults using behavioral, neuroimaging, and computational methods. These are described more fully on his lab website. He co-founded and co-directs the Databrary.org data library, co-founded the Penn State Social, Life, & Engineering Sciences Imaging Center (SLEIC), directs the Open Data and Developmental Science (ODDS) initiative through the Penn State Child Study Center, and advocates for more open, transparent, and reproducible scientific practices, especially in psychology and neuroscience.
Ph.D., Psychology, 1997
Carnegie Mellon University
M.S., Psychology, 1995
Carnegie Mellon University
A.B. magna cum laude, Cognitive Science, 1985
Brown University
We used force matching and verbal reports of finger force to explore a prediction of the iso-perceptual manifold concept, which assumes that stable percepts are associated with a manifold in the afferent-efferent space. Young subjects produced various force magnitudes with the index finger, middle finger, or both fingers together. They then reported the force level using a verbal scale and by matching the force with fingers of the contralateral hand. Force matching, but not verbal reports, showed larger variable errors for individual fingers in the two-finger task than in the single-finger tasks. We discuss possible differences in afferent and efferent contributions to force perception at low and high forces based on the idea of motor control with referent coordinates for the effectors. The difference between force matching and verbal reports is possibly related to differences in the neural circuitry for perceiving without action and perceiving-to-act.
In 2019, the Governing Council of the Society for Research in Child Development (SRCD) adopted a Policy on Scientific Integrity, Transparency, and Openness (SRCD, 2019a) and accompanying Author Guidelines on Scientific Integrity and Openness in Child Development (SRCD, 2019b). In this issue, a companion article (Gennetian, Tamis‐LeMonda, & Frank) discusses the opportunities to realize SRCD’s vision for a science of child development that is open, transparent, robust, and impactful. In this article, we discuss some of the challenges associated with realizing SRCD’s vision. In identifying these challenges—protecting participants and researchers from harm, respecting diversity, and balancing the benefits of change with the costs—we also offer constructive solutions.
Interdisciplinary exchange of ideas and tools can accelerate scientific progress. For example, findings from developmental and vision science have spurred recent advances in artificial intelligence and computer vision. However, relatively little attention has been paid to how artificial intelligence and computer vision can facilitate research in developmental science. The current study presents AutoViDev, an automatic video-analysis tool that uses machine learning and computer vision to support video-based developmental research. AutoViDev estimates full-body pose in real-time video streams using convolutional pose machines. AutoViDev provides valuable information about a variety of behaviors, including gaze direction, facial expressions, posture, locomotion, manual actions, and interactions with objects. We present a high-level architecture of the framework and describe two projects that demonstrate its usability. We discuss the benefits of applying AutoViDev to large-scale, shared video datasets and highlight how machine learning and computer vision can enhance and accelerate research in developmental science.
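To make the pipeline concrete, here is a minimal sketch of the kind of frame-by-frame analysis loop such a tool performs. This is an illustration, not AutoViDev's actual code: it assumes the `av` package for video decoding, and `detect_pose()` is a hypothetical stand-in for a convolutional pose machine or similar model.

```r
# Illustrative frame-by-frame pose-extraction loop (not AutoViDev's code).
# Assumes the `av` package for video decoding; detect_pose() is a
# hypothetical placeholder for a convolutional pose machine model.
library(av)

extract_poses <- function(video_path, fps = 5) {
  frames_dir <- tempfile("frames")
  dir.create(frames_dir)
  # Decode the video into still images at a reduced frame rate
  av_video_images(video_path, destdir = frames_dir, fps = fps)
  frames <- list.files(frames_dir, full.names = TRUE)
  # Run the pose model on each frame; each element holds one frame's keypoints
  lapply(frames, detect_pose)
}
```

Downsampling the frame rate in this way is one practical choice for keeping large, shared video corpora tractable while still capturing the posture and movement information described above.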
I teach the following courses on a regular basis:
I have taught this course in the past:
I have also helped lead the following R workshops:
musings on this and that…
The Play & Learning Across a Year Project
An R package for interacting with the Databrary.org API (see the usage sketch below).
An open, secure data library for sharing video.
An open source tool for video annotation.
What is the emotional environment really like?
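As a rough sketch of the kind of request the Databrary R package above wraps, the following is illustrative only: it assumes Databrary's public host at nyu.databrary.org and a `/api/volume/{id}` endpoint with volume 1 as an example ID, and it uses the `httr` and `jsonlite` packages directly rather than the package's own functions.

```r
# Illustrative call to the Databrary API with httr (not the package's own code).
# Assumes the public host nyu.databrary.org; volume 1 is an example ID.
library(httr)
library(jsonlite)

resp <- GET("https://nyu.databrary.org/api/volume/1")
stop_for_status(resp)  # fail loudly on an HTTP error
vol <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))
vol$name  # the volume's title
```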