To understand the neural computations that lead to object understanding and guide flexible, naturalistic behaviours, Ramanujan Srinath deploys AI techniques to inform efficient visual neuroscience experiments. Using closed-loop experiments, in which AI generates hypotheses and experimental data in turn informs the AI models, Srinath will test the central hypothesis that learned associations between object and scene properties shape how those properties are inferred to guide behaviour.


Ramanujan Srinath’s research focuses on understanding how the brain processes visual information to guide flexible behaviour. He uses electrophysiological, psychophysical, and computational techniques to study how the primate visual system processes objects presented on a screen, and how inferences about those objects are mapped to behavioural outputs under different environmental, cognitive, and task conditions. During his Ph.D. in the labs of Drs. Ed Connor and Kristina Nielsen, Srinath studied how 3D object information is extracted from 2D images, using single-unit extracellular electrophysiology and two-photon imaging in monkeys. He brought his experience with algorithms for generating parameterised, naturalistic 3D visual stimuli to his postdoctoral work in the lab of Dr. Marlene Cohen. The broad goal of Srinath’s research programme is to understand how the visual brain enables people to interact flexibly with the world by inferring relevant properties of 3D objects in naturalistic environments.