Learning and replay of state representations in the human brain
Agents can only act and learn efficiently if their internal representations characterise the environment beyond mere sensory observations. Such ‘state’ representations enable the agent to distinguish between observations that appear identical but carry different task significance, and to reflect task structure through similarities between states. In my talk, I will address two questions: what role state representations play in learning and decision making, and how they shape memory reactivation. I will present work in which we use novel computational, behavioural and fMRI analysis methods to seek answers. Our results suggest that, in the human brain, sophisticated internal representations are formed in the orbitofrontal cortex, change with learning, influence value representations, and are reactivated in the hippocampus during fast sequential replay events. Taken together, these findings shed light on the interaction between the representational and algorithmic foundations of how the brain generates intelligent behaviour.