Abstract

While convolutional neural networks (CNNs) have emerged as powerful models of the biological visual cortex, which properties of their training data drive this similarity remains an open question. By manipulating the realism of the virtual environment, motion statistics, and visual optics in training datasets created with the Unity video-game engine, we trained CNNs using a self-supervised learning approach and compared their activations to neural recordings from the mouse visual cortex. The findings reveal that the realism of the virtual environment substantially increases the similarity between network activations and mouse neural data, while the effects of motion statistics and visual optics are more nuanced and area-specific.
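As a rough illustration of the kind of model-brain comparison described above, the sketch below computes linear centered kernel alignment (CKA) between a CNN layer's responses and recorded neural responses to the same stimuli. The abstract does not specify the similarity metric used in this work, so the choice of CKA, the array shapes, and all variable names here are illustrative assumptions only.

    # Minimal sketch (assumed metric: linear CKA) of comparing CNN layer
    # activations with neural recordings to the same set of stimuli.
    import numpy as np

    def linear_cka(acts: np.ndarray, neural: np.ndarray) -> float:
        """Linear CKA between two (n_stimuli, n_features) response matrices."""
        X = acts - acts.mean(axis=0)      # centre each model unit over stimuli
        Y = neural - neural.mean(axis=0)  # centre each recorded neuron over stimuli
        cross = np.linalg.norm(Y.T @ X, ord="fro") ** 2
        norm_x = np.linalg.norm(X.T @ X, ord="fro")
        norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
        return cross / (norm_x * norm_y)

    # Hypothetical shapes: 600 stimuli, 512 CNN units, 120 recorded neurons.
    rng = np.random.default_rng(0)
    cnn_layer = rng.standard_normal((600, 512))
    mouse_responses = rng.standard_normal((600, 120))
    print(f"CKA similarity: {linear_cka(cnn_layer, mouse_responses):.3f}")

With random inputs the score is near zero; higher values (up to 1) indicate more similar representational geometry between the network layer and the recorded population.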

Presenter

Parsa Torabian, MASc candidate in Systems Design Engineering

Attend online.

Attending this seminar will count towards the graduate student seminar attendance milestone!