We perform t-SNE on the BOLD signal from each of the 4,916 unique scene trials (2,900 for subject 4, CSI4). The BOLD signal was extracted from every voxel in each ROI for each participant. We visualize the t-SNE results with different categorical labels; every t-SNE figure contains the same data points/coordinates, and the different labels are purely for visualization purposes.
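A minimal sketch of this step, assuming a trials-by-voxels BOLD matrix for one ROI and using scikit-learn's TSNE; the array shapes and variable names here are illustrative stand-ins, not the actual data or analysis code:

```python
# Hypothetical sketch: t-SNE on ROI voxel responses, one row per scene trial.
# Shapes are small stand-ins for the real 4,916 trials x ROI voxels matrix.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 50
bold = rng.standard_normal((n_trials, n_voxels))  # trials x voxels BOLD matrix

tsne = TSNE(n_components=2, perplexity=30, random_state=0)
coords = tsne.fit_transform(bold)  # one 2-D coordinate per scene trial
print(coords.shape)
```

The same `coords` array would then be re-plotted under each labeling scheme, since the labels only change the coloring, not the embedding.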
For more details, please read the t-SNE section in our paper.
First, we examine the similarity space across the different image datasets, exploiting their implicit image attributes: Scenes contains whole scenes, ImageNet focuses on a single object, and COCO lies in between, with multiple objects in an interactive scene. Given that ROIs tend to process visual input with specific properties (e.g., category selectivity), we would expect a separation in t-SNE space between the image datasets, especially Scenes vs. ImageNet.
Second, the t-SNE results are visualized using only the ImageNet images, with the labels broken down further: each image is labeled with its ImageNet super category.
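Since each figure reuses the same t-SNE coordinates with different labels, the plotting step can be sketched as below; the coordinates and label values are random placeholders, not the actual results:

```python
# Hypothetical sketch: color one fixed set of t-SNE coordinates by a
# categorical label set (here, image dataset; super categories work the same).
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for scripted figure generation
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
coords = rng.standard_normal((200, 2))  # stand-in t-SNE output (trials x 2)
labels = rng.choice(["Scenes", "ImageNet", "COCO"], size=200)

fig, ax = plt.subplots()
for name in np.unique(labels):
    mask = labels == name
    ax.scatter(coords[mask, 0], coords[mask, 1], label=name, s=10)
ax.legend()
fig.savefig("tsne_by_dataset.png")
```

Swapping `labels` for ImageNet super categories (while keeping `coords` fixed) yields the second family of figures.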
If you would like to view the detailed t-SNE visualizations for all subjects and all ROIs, you can download them here.
Our t-SNE visualizations include