How does the brain process different sensory cues, from visual stimuli to social communication, to ultimately make a relevant decision? My laboratory approaches this question from several angles. We aim to understand: (1) object representation and its temporal dynamics under challenging conditions (e.g., occluded objects), (2) the representation of social cues in human communication, and (3) the way the brain integrates these temporal representations to make a decision and thus a relevant action. To do this, we mainly use computational modelling alongside psychophysics, electroencephalography (EEG), and eye-tracking techniques.


Object/Face Recognition:

Our previous work showed that feedforward models cannot fully explain the mechanisms of object/face representation, especially under challenging conditions. We therefore investigate the neural mechanisms that could account for the brain's behaviour under such conditions, including object/face recognition across rotations in depth and in the plane, illumination variations, and occlusion.


Decision Making:

Object/face recognition based on information represented over time requires a temporal decision-making mechanism, one that differs from the classical classifiers implemented in object recognition models. Our second goal is therefore to understand how the brain integrates temporal sensory information to commit to a choice.
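To illustrate what "integrating temporal sensory information to commit to a choice" can mean computationally, a standard evidence-accumulation (drift-diffusion) model is sketched below. This is a generic textbook model, not our laboratory's specific implementation, and the drift, noise, and threshold values are illustrative assumptions:

```python
import numpy as np

def drift_diffusion_trial(drift=0.1, noise=1.0, threshold=1.5,
                          dt=0.01, max_steps=10000, rng=None):
    """Accumulate noisy momentary evidence until a decision bound is hit.

    Returns (choice, reaction_time): choice is +1 or -1 for the upper or
    lower bound, or 0 if neither bound is reached within max_steps.
    All parameter values are illustrative, not fitted to data.
    """
    rng = rng or np.random.default_rng()
    x = 0.0  # accumulated evidence
    for step in range(1, max_steps + 1):
        # momentary evidence: constant drift plus Gaussian noise
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        if x >= threshold:
            return +1, step * dt
        if x <= -threshold:
            return -1, step * dt
    return 0, max_steps * dt

# with a positive drift, most trials terminate at the upper bound,
# and reaction times emerge from the accumulation process itself
choices = [drift_diffusion_trial(rng=np.random.default_rng(i))[0]
           for i in range(200)]
```

Unlike a classical classifier, which maps a fixed input to a label in one shot, this mechanism produces both a choice and a decision time from the same stream of evidence.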


Social Decision Making:

Our minds constantly interact with one another, exchanging one of the most prevalent and essential kinds of information in daily life: social information. Using computational approaches derived primarily from individual decision making, we aim to understand social decision making through multi-dimensional, interconnected models of individual decision makers. Through this approach, our goal is to bridge the gap between decision making in isolated and social contexts.
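One simple way to picture "interconnected individual models" is to couple two evidence accumulators so that each agent's state leaks into the other's. The sketch below is only an illustration of that idea under assumed parameter values; the coupling scheme and the joint-decision rule are hypothetical, not our laboratory's model:

```python
import numpy as np

def coupled_accumulators(drift_a=0.2, drift_b=-0.1, coupling=0.3,
                         noise=0.5, threshold=2.0, dt=0.01,
                         max_steps=20000, rng=None):
    """Two evidence accumulators that share information over time.

    Each agent integrates its own noisy evidence plus a fraction
    (`coupling`) of the partner's current accumulated evidence, a
    simple stand-in for social information exchange. Returns the two
    agents' choices (sign of their evidence) and the time at which
    both have reached a bound. All parameters are illustrative.
    """
    rng = rng or np.random.default_rng()
    xa = xb = 0.0
    for step in range(1, max_steps + 1):
        ea = drift_a * dt + noise * np.sqrt(dt) * rng.standard_normal()
        eb = drift_b * dt + noise * np.sqrt(dt) * rng.standard_normal()
        # simultaneous update: each accumulator drifts toward its own
        # evidence while being pulled by the partner's current state
        xa, xb = (xa + ea + coupling * xb * dt,
                  xb + eb + coupling * xa * dt)
        if abs(xa) >= threshold and abs(xb) >= threshold:
            return np.sign(xa), np.sign(xb), step * dt
    return 0.0, 0.0, max_steps * dt
```

With zero coupling this reduces to two independent individual decision makers; increasing the coupling makes the agents' choices and timing interdependent, which is the sense in which such models connect isolated and social decision making.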