In BOLD5000, we dramatically increase the stimulus set size deployed in an fMRI study of visual scene processing.
- We scanned 4 participants in a slow event-related design with 4,916 unique scenes.
- Data were collected over 16 sessions: 15 task-related sessions, plus one additional session for acquiring high-resolution anatomical scans. Custom headcases* were used to reduce motion and maintain consistent head placement and alignment across sessions.
- In 8 of the 15 task-related sessions, a functional localizer was run to independently define scene-selective cortex.
- In each scanning session, participants filled out a questionnaire (Daily Intake) about their daily routine, including current status regarding food and beverage intake, sleep, exercise, ibuprofen use, and comfort in the scanner. During BOLD scanning, physiological data (heart rate and respiration) were also acquired.
Stimuli and Presentation Details
- 4,803 images each presented on a single trial throughout the experiment
- 112 images repeated four times, and one image repeated three times
- a total of 5,254 stimulus trials.
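As a quick sanity check (an illustrative sketch, not part of any released BOLD5000 code), the trial counts above are self-consistent:

```python
# Verify the trial arithmetic described above (illustrative only).
singly_presented = 4803   # images shown exactly once
repeated_four = 112       # images shown four times each
repeated_three = 1        # one image shown three times

total_trials = singly_presented + repeated_four * 4 + repeated_three * 3
print(total_trials)  # 5254, matching the reported trial total
```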
The stimuli were drawn from three datasets:
- 1,000 images from Scene Images (with scene categories based on SUN categories)
- 2,000 images from the COCO dataset
- 1,916 images from the ImageNet dataset
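The three source datasets account exactly for the 4,916 unique scenes reported above; a minimal check:

```python
# Sum the per-dataset image counts listed above (illustrative only).
scene_images = 1000  # Scene Images (SUN-based categories)
coco = 2000          # COCO
imagenet = 1916      # ImageNet

unique_scenes = scene_images + coco + imagenet
print(unique_scenes)  # 4916 unique scenes
```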
Images were presented for 1 second, with 9 seconds of fixation between trials. Participants were asked to judge whether they liked, disliked, or were neutral about the image.
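Assuming every trial is exactly 1 second of image plus 9 seconds of fixation (the text does not state whether runs include extra time), the total trial time implied by these numbers is roughly:

```python
# Rough timing estimate from the trial structure above. Assumes a fixed
# 10 s trial (1 s image + 9 s fixation) with no inter-run gaps; actual
# session durations may differ.
image_s = 1
fixation_s = 9
trials = 5254

total_s = trials * (image_s + fixation_s)
hours = total_s / 3600
print(f"{total_s} s ≈ {hours:.1f} h of trial time")
```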
In sum, our dataset is unique in three ways: it is
- an order of magnitude larger than existing slow event-related fMRI datasets
- extremely diverse in stimuli
- substantially overlapping with existing computer vision datasets
Our large-scale dataset enables novel neural network training and novel exploration of benchmark computer vision datasets through neuroscience. Finally, the scale advantage of our dataset and the use of a slow event-related design enables, for the first time, joint computer vision and fMRI analyses that span a significant and diverse region of image space using high-performing models.