Final Oral Public Examination

Apr 27, 2026
10–11 am
PNI A32

“Statistical Inference of Normative Structure in Neural and Behavioral Dynamics”

Advisor: Jonathan Pillow

Abstract:

Behavioral and neural recordings provide only a narrow window onto rich underlying computations. A central challenge is to infer and characterize the computational processes guiding intelligent behavior from partial observations alone. Normative accounts, such as reinforcement learning (RL) and optimal control, provide theories and models for understanding these processes. However, inferring internal processes during learning and decision-making is especially challenging, as the data are typically non-stationary, variable, and limited. Purely theoretical accounts are often detached from these complexities, which statistical inference methods are well equipped to handle.

The broad aim of this thesis is to develop a principled approach for inferring normative structure and studying internal processes in neuroscientific data through probabilistic latent variable models. We begin by considering the sparse coding model of early vision, showing how variational inference can be leveraged to revisit and improve previous inference methods. We then turn to the analysis of dynamics in time-series data. First, we establish theoretical foundations for understanding the interplay of control and internal dynamics in partially observed settings. Second, we develop probabilistic state-space models designed for the analysis of non-stationary dynamics, for example by allowing flexible state cardinality to capture changes in dynamical regimes, or by directly incorporating time-varying experimental conditions to yield more interpretable models. Finally, we build on this statistical pipeline to analyze de novo task learning in mice, a highly stochastic and understudied regime. We embed RL within probabilistic state-space models to infer learning rules at the single-animal level from trial-by-trial choices alone. We uncover rich heterogeneity in early learning strategies across stimuli, time, and curricula, highlighting systematic departures that conventional RL models miss.